Test Report: KVM_Linux_crio 21724

cdde98f5260d5cfb20fef0dee46a24863d2037a7:2025-10-13:41893

Test fail (13/324)

TestAddons/parallel/Ingress (161.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-323324 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-323324 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-323324 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [485bbbff-1382-46c5-a272-230368cf2188] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [485bbbff-1382-46c5-a272-230368cf2188] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.005889786s
I1013 21:21:39.369557   19947 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-323324 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.807837677s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
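
Note: exit status 28 matches curl's "operation timed out" error code, surfaced here through minikube ssh, which suggests the request to the ingress controller hung rather than returning an error page. The Go sketch below is only an illustration of what the failing step checks (an HTTP GET against 127.0.0.1 with the Host header forced to nginx.example.com, retried until a deadline, as if run from inside the minikube VM); it is hypothetical, not part of addons_test.go, and the address and timeouts simply mirror the log above.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll the ingress endpoint the same way the test's curl does:
	// GET http://127.0.0.1/ with Host: nginx.example.com, until it answers or the deadline passes.
	client := &http.Client{Timeout: 10 * time.Second}
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// The ingress rule routes on the host name, so the Host header must be set
		// explicitly, just like curl -H 'Host: nginx.example.com'.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("ingress responded with 200 OK")
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the ingress to respond")
}
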
addons_test.go:288: (dbg) Run:  kubectl --context addons-323324 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.156
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-323324 -n addons-323324
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 logs -n 25: (1.642441591s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-938231                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-938231 │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │ 13 Oct 25 21:17 UTC │
	│ start   │ --download-only -p binary-mirror-576009 --alsologtostderr --binary-mirror http://127.0.0.1:46135 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-576009 │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │                     │
	│ delete  │ -p binary-mirror-576009                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-576009 │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:18 UTC │
	│ addons  │ enable dashboard -p addons-323324                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ addons  │ disable dashboard -p addons-323324                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │                     │
	│ start   │ -p addons-323324 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:18 UTC │ 13 Oct 25 21:20 UTC │
	│ addons  │ addons-323324 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:20 UTC │ 13 Oct 25 21:20 UTC │
	│ addons  │ addons-323324 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:20 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ enable headlamp -p addons-323324 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ ip      │ addons-323324 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-323324                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ ssh     │ addons-323324 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │                     │
	│ addons  │ addons-323324 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ ssh     │ addons-323324 ssh cat /opt/local-path-provisioner/pvc-b4ec6a44-54cb-4cec-ad26-77ce732a0da9_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:21 UTC │
	│ addons  │ addons-323324 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:21 UTC │ 13 Oct 25 21:22 UTC │
	│ ip      │ addons-323324 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-323324        │ jenkins │ v1.37.0 │ 13 Oct 25 21:23 UTC │ 13 Oct 25 21:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:18:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:18:00.175825   20588 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:18:00.176041   20588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:00.176049   20588 out.go:374] Setting ErrFile to fd 2...
	I1013 21:18:00.176054   20588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:18:00.176276   20588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:18:00.176773   20588 out.go:368] Setting JSON to false
	I1013 21:18:00.177537   20588 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3628,"bootTime":1760386652,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:18:00.177623   20588 start.go:141] virtualization: kvm guest
	I1013 21:18:00.179418   20588 out.go:179] * [addons-323324] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:18:00.180730   20588 notify.go:220] Checking for updates...
	I1013 21:18:00.180741   20588 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:18:00.182028   20588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:18:00.183375   20588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:18:00.184793   20588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:18:00.186203   20588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:18:00.187591   20588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:18:00.188872   20588 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:18:00.218246   20588 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 21:18:00.219385   20588 start.go:305] selected driver: kvm2
	I1013 21:18:00.219400   20588 start.go:925] validating driver "kvm2" against <nil>
	I1013 21:18:00.219409   20588 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:18:00.220036   20588 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:18:00.220112   20588 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:18:00.233577   20588 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:18:00.233607   20588 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:18:00.246787   20588 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:18:00.246843   20588 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:18:00.247140   20588 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:18:00.247245   20588 cni.go:84] Creating CNI manager for ""
	I1013 21:18:00.247296   20588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:18:00.247308   20588 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 21:18:00.247361   20588 start.go:349] cluster config:
	{Name:addons-323324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:18:00.247460   20588 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:18:00.249361   20588 out.go:179] * Starting "addons-323324" primary control-plane node in "addons-323324" cluster
	I1013 21:18:00.250859   20588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:00.250900   20588 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:18:00.250912   20588 cache.go:58] Caching tarball of preloaded images
	I1013 21:18:00.251003   20588 preload.go:233] Found /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:18:00.251015   20588 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:18:00.251361   20588 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/config.json ...
	I1013 21:18:00.251387   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/config.json: {Name:mk02c1720a10f336d0f96780bbdfde845b6a85a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:00.251522   20588 start.go:360] acquireMachinesLock for addons-323324: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 21:18:00.251569   20588 start.go:364] duration metric: took 34.18µs to acquireMachinesLock for "addons-323324"
	I1013 21:18:00.251586   20588 start.go:93] Provisioning new machine with config: &{Name:addons-323324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:18:00.251640   20588 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 21:18:00.253415   20588 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1013 21:18:00.253578   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:00.253633   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:00.266914   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I1013 21:18:00.267396   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:00.267941   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:00.267995   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:00.268396   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:00.268608   20588 main.go:141] libmachine: (addons-323324) Calling .GetMachineName
	I1013 21:18:00.268743   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:00.268904   20588 start.go:159] libmachine.API.Create for "addons-323324" (driver="kvm2")
	I1013 21:18:00.268923   20588 client.go:168] LocalClient.Create starting
	I1013 21:18:00.268963   20588 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem
	I1013 21:18:00.588121   20588 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem
	I1013 21:18:00.630274   20588 main.go:141] libmachine: Running pre-create checks...
	I1013 21:18:00.630295   20588 main.go:141] libmachine: (addons-323324) Calling .PreCreateCheck
	I1013 21:18:00.630777   20588 main.go:141] libmachine: (addons-323324) Calling .GetConfigRaw
	I1013 21:18:00.631276   20588 main.go:141] libmachine: Creating machine...
	I1013 21:18:00.631292   20588 main.go:141] libmachine: (addons-323324) Calling .Create
	I1013 21:18:00.631441   20588 main.go:141] libmachine: (addons-323324) creating domain...
	I1013 21:18:00.631460   20588 main.go:141] libmachine: (addons-323324) creating network...
	I1013 21:18:00.632992   20588 main.go:141] libmachine: (addons-323324) DBG | found existing default network
	I1013 21:18:00.633187   20588 main.go:141] libmachine: (addons-323324) DBG | <network>
	I1013 21:18:00.633211   20588 main.go:141] libmachine: (addons-323324) DBG |   <name>default</name>
	I1013 21:18:00.633223   20588 main.go:141] libmachine: (addons-323324) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 21:18:00.633236   20588 main.go:141] libmachine: (addons-323324) DBG |   <forward mode='nat'>
	I1013 21:18:00.633256   20588 main.go:141] libmachine: (addons-323324) DBG |     <nat>
	I1013 21:18:00.633275   20588 main.go:141] libmachine: (addons-323324) DBG |       <port start='1024' end='65535'/>
	I1013 21:18:00.633282   20588 main.go:141] libmachine: (addons-323324) DBG |     </nat>
	I1013 21:18:00.633287   20588 main.go:141] libmachine: (addons-323324) DBG |   </forward>
	I1013 21:18:00.633294   20588 main.go:141] libmachine: (addons-323324) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 21:18:00.633299   20588 main.go:141] libmachine: (addons-323324) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 21:18:00.633305   20588 main.go:141] libmachine: (addons-323324) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 21:18:00.633308   20588 main.go:141] libmachine: (addons-323324) DBG |     <dhcp>
	I1013 21:18:00.633326   20588 main.go:141] libmachine: (addons-323324) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 21:18:00.633333   20588 main.go:141] libmachine: (addons-323324) DBG |     </dhcp>
	I1013 21:18:00.633360   20588 main.go:141] libmachine: (addons-323324) DBG |   </ip>
	I1013 21:18:00.633375   20588 main.go:141] libmachine: (addons-323324) DBG | </network>
	I1013 21:18:00.633383   20588 main.go:141] libmachine: (addons-323324) DBG | 
	I1013 21:18:00.633946   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:00.633784   20616 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1013 21:18:00.633979   20588 main.go:141] libmachine: (addons-323324) DBG | defining private network:
	I1013 21:18:00.633993   20588 main.go:141] libmachine: (addons-323324) DBG | 
	I1013 21:18:00.634000   20588 main.go:141] libmachine: (addons-323324) DBG | <network>
	I1013 21:18:00.634009   20588 main.go:141] libmachine: (addons-323324) DBG |   <name>mk-addons-323324</name>
	I1013 21:18:00.634016   20588 main.go:141] libmachine: (addons-323324) DBG |   <dns enable='no'/>
	I1013 21:18:00.634029   20588 main.go:141] libmachine: (addons-323324) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 21:18:00.634037   20588 main.go:141] libmachine: (addons-323324) DBG |     <dhcp>
	I1013 21:18:00.634047   20588 main.go:141] libmachine: (addons-323324) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 21:18:00.634058   20588 main.go:141] libmachine: (addons-323324) DBG |     </dhcp>
	I1013 21:18:00.634066   20588 main.go:141] libmachine: (addons-323324) DBG |   </ip>
	I1013 21:18:00.634076   20588 main.go:141] libmachine: (addons-323324) DBG | </network>
	I1013 21:18:00.634085   20588 main.go:141] libmachine: (addons-323324) DBG | 
	I1013 21:18:00.639962   20588 main.go:141] libmachine: (addons-323324) DBG | creating private network mk-addons-323324 192.168.39.0/24...
	I1013 21:18:00.704334   20588 main.go:141] libmachine: (addons-323324) DBG | private network mk-addons-323324 192.168.39.0/24 created
	I1013 21:18:00.704607   20588 main.go:141] libmachine: (addons-323324) DBG | <network>
	I1013 21:18:00.704646   20588 main.go:141] libmachine: (addons-323324) setting up store path in /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324 ...
	I1013 21:18:00.704656   20588 main.go:141] libmachine: (addons-323324) DBG |   <name>mk-addons-323324</name>
	I1013 21:18:00.704669   20588 main.go:141] libmachine: (addons-323324) DBG |   <uuid>a6e7f0be-d2eb-444d-a88b-a8795a5593cb</uuid>
	I1013 21:18:00.704678   20588 main.go:141] libmachine: (addons-323324) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1013 21:18:00.704714   20588 main.go:141] libmachine: (addons-323324) building disk image from file:///home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 21:18:00.704731   20588 main.go:141] libmachine: (addons-323324) DBG |   <mac address='52:54:00:91:06:7a'/>
	I1013 21:18:00.704747   20588 main.go:141] libmachine: (addons-323324) Downloading /home/jenkins/minikube-integration/21724-15625/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 21:18:00.704803   20588 main.go:141] libmachine: (addons-323324) DBG |   <dns enable='no'/>
	I1013 21:18:00.704831   20588 main.go:141] libmachine: (addons-323324) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 21:18:00.704846   20588 main.go:141] libmachine: (addons-323324) DBG |     <dhcp>
	I1013 21:18:00.704855   20588 main.go:141] libmachine: (addons-323324) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 21:18:00.704863   20588 main.go:141] libmachine: (addons-323324) DBG |     </dhcp>
	I1013 21:18:00.704870   20588 main.go:141] libmachine: (addons-323324) DBG |   </ip>
	I1013 21:18:00.704879   20588 main.go:141] libmachine: (addons-323324) DBG | </network>
	I1013 21:18:00.704884   20588 main.go:141] libmachine: (addons-323324) DBG | 
	I1013 21:18:00.704913   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:00.704598   20616 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:18:00.961301   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:00.961149   20616 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa...
	I1013 21:18:01.360039   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:01.359877   20616 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/addons-323324.rawdisk...
	I1013 21:18:01.360060   20588 main.go:141] libmachine: (addons-323324) DBG | Writing magic tar header
	I1013 21:18:01.360070   20588 main.go:141] libmachine: (addons-323324) DBG | Writing SSH key tar header
	I1013 21:18:01.360078   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:01.360012   20616 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324 ...
	I1013 21:18:01.360107   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324
	I1013 21:18:01.360132   20588 main.go:141] libmachine: (addons-323324) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324 (perms=drwx------)
	I1013 21:18:01.360179   20588 main.go:141] libmachine: (addons-323324) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube/machines (perms=drwxr-xr-x)
	I1013 21:18:01.360194   20588 main.go:141] libmachine: (addons-323324) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube (perms=drwxr-xr-x)
	I1013 21:18:01.360203   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube/machines
	I1013 21:18:01.360212   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:18:01.360217   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625
	I1013 21:18:01.360225   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 21:18:01.360231   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home/jenkins
	I1013 21:18:01.360240   20588 main.go:141] libmachine: (addons-323324) DBG | checking permissions on dir: /home
	I1013 21:18:01.360244   20588 main.go:141] libmachine: (addons-323324) DBG | skipping /home - not owner
	I1013 21:18:01.360257   20588 main.go:141] libmachine: (addons-323324) setting executable bit set on /home/jenkins/minikube-integration/21724-15625 (perms=drwxrwxr-x)
	I1013 21:18:01.360263   20588 main.go:141] libmachine: (addons-323324) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 21:18:01.360270   20588 main.go:141] libmachine: (addons-323324) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 21:18:01.360274   20588 main.go:141] libmachine: (addons-323324) defining domain...
	I1013 21:18:01.361537   20588 main.go:141] libmachine: (addons-323324) defining domain using XML: 
	I1013 21:18:01.361591   20588 main.go:141] libmachine: (addons-323324) <domain type='kvm'>
	I1013 21:18:01.361604   20588 main.go:141] libmachine: (addons-323324)   <name>addons-323324</name>
	I1013 21:18:01.361618   20588 main.go:141] libmachine: (addons-323324)   <memory unit='MiB'>4096</memory>
	I1013 21:18:01.361629   20588 main.go:141] libmachine: (addons-323324)   <vcpu>2</vcpu>
	I1013 21:18:01.361637   20588 main.go:141] libmachine: (addons-323324)   <features>
	I1013 21:18:01.361646   20588 main.go:141] libmachine: (addons-323324)     <acpi/>
	I1013 21:18:01.361656   20588 main.go:141] libmachine: (addons-323324)     <apic/>
	I1013 21:18:01.361664   20588 main.go:141] libmachine: (addons-323324)     <pae/>
	I1013 21:18:01.361671   20588 main.go:141] libmachine: (addons-323324)   </features>
	I1013 21:18:01.361680   20588 main.go:141] libmachine: (addons-323324)   <cpu mode='host-passthrough'>
	I1013 21:18:01.361687   20588 main.go:141] libmachine: (addons-323324)   </cpu>
	I1013 21:18:01.361696   20588 main.go:141] libmachine: (addons-323324)   <os>
	I1013 21:18:01.361702   20588 main.go:141] libmachine: (addons-323324)     <type>hvm</type>
	I1013 21:18:01.361711   20588 main.go:141] libmachine: (addons-323324)     <boot dev='cdrom'/>
	I1013 21:18:01.361719   20588 main.go:141] libmachine: (addons-323324)     <boot dev='hd'/>
	I1013 21:18:01.361744   20588 main.go:141] libmachine: (addons-323324)     <bootmenu enable='no'/>
	I1013 21:18:01.361764   20588 main.go:141] libmachine: (addons-323324)   </os>
	I1013 21:18:01.361771   20588 main.go:141] libmachine: (addons-323324)   <devices>
	I1013 21:18:01.361776   20588 main.go:141] libmachine: (addons-323324)     <disk type='file' device='cdrom'>
	I1013 21:18:01.361785   20588 main.go:141] libmachine: (addons-323324)       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/boot2docker.iso'/>
	I1013 21:18:01.361790   20588 main.go:141] libmachine: (addons-323324)       <target dev='hdc' bus='scsi'/>
	I1013 21:18:01.361796   20588 main.go:141] libmachine: (addons-323324)       <readonly/>
	I1013 21:18:01.361800   20588 main.go:141] libmachine: (addons-323324)     </disk>
	I1013 21:18:01.361832   20588 main.go:141] libmachine: (addons-323324)     <disk type='file' device='disk'>
	I1013 21:18:01.361850   20588 main.go:141] libmachine: (addons-323324)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 21:18:01.361859   20588 main.go:141] libmachine: (addons-323324)       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/addons-323324.rawdisk'/>
	I1013 21:18:01.361868   20588 main.go:141] libmachine: (addons-323324)       <target dev='hda' bus='virtio'/>
	I1013 21:18:01.361873   20588 main.go:141] libmachine: (addons-323324)     </disk>
	I1013 21:18:01.361880   20588 main.go:141] libmachine: (addons-323324)     <interface type='network'>
	I1013 21:18:01.361893   20588 main.go:141] libmachine: (addons-323324)       <source network='mk-addons-323324'/>
	I1013 21:18:01.361898   20588 main.go:141] libmachine: (addons-323324)       <model type='virtio'/>
	I1013 21:18:01.361905   20588 main.go:141] libmachine: (addons-323324)     </interface>
	I1013 21:18:01.361910   20588 main.go:141] libmachine: (addons-323324)     <interface type='network'>
	I1013 21:18:01.361917   20588 main.go:141] libmachine: (addons-323324)       <source network='default'/>
	I1013 21:18:01.361921   20588 main.go:141] libmachine: (addons-323324)       <model type='virtio'/>
	I1013 21:18:01.361929   20588 main.go:141] libmachine: (addons-323324)     </interface>
	I1013 21:18:01.361933   20588 main.go:141] libmachine: (addons-323324)     <serial type='pty'>
	I1013 21:18:01.361941   20588 main.go:141] libmachine: (addons-323324)       <target port='0'/>
	I1013 21:18:01.361944   20588 main.go:141] libmachine: (addons-323324)     </serial>
	I1013 21:18:01.361949   20588 main.go:141] libmachine: (addons-323324)     <console type='pty'>
	I1013 21:18:01.361956   20588 main.go:141] libmachine: (addons-323324)       <target type='serial' port='0'/>
	I1013 21:18:01.361961   20588 main.go:141] libmachine: (addons-323324)     </console>
	I1013 21:18:01.361964   20588 main.go:141] libmachine: (addons-323324)     <rng model='virtio'>
	I1013 21:18:01.361970   20588 main.go:141] libmachine: (addons-323324)       <backend model='random'>/dev/random</backend>
	I1013 21:18:01.361978   20588 main.go:141] libmachine: (addons-323324)     </rng>
	I1013 21:18:01.361987   20588 main.go:141] libmachine: (addons-323324)   </devices>
	I1013 21:18:01.361990   20588 main.go:141] libmachine: (addons-323324) </domain>
	I1013 21:18:01.361997   20588 main.go:141] libmachine: (addons-323324) 
	I1013 21:18:01.369496   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:e2:70:c0 in network default
	I1013 21:18:01.370066   20588 main.go:141] libmachine: (addons-323324) starting domain...
	I1013 21:18:01.370078   20588 main.go:141] libmachine: (addons-323324) ensuring networks are active...
	I1013 21:18:01.370086   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:01.370718   20588 main.go:141] libmachine: (addons-323324) Ensuring network default is active
	I1013 21:18:01.371085   20588 main.go:141] libmachine: (addons-323324) Ensuring network mk-addons-323324 is active
	I1013 21:18:01.372515   20588 main.go:141] libmachine: (addons-323324) getting domain XML...
	I1013 21:18:01.373583   20588 main.go:141] libmachine: (addons-323324) DBG | starting domain XML:
	I1013 21:18:01.373605   20588 main.go:141] libmachine: (addons-323324) DBG | <domain type='kvm'>
	I1013 21:18:01.373616   20588 main.go:141] libmachine: (addons-323324) DBG |   <name>addons-323324</name>
	I1013 21:18:01.373624   20588 main.go:141] libmachine: (addons-323324) DBG |   <uuid>0b14f569-4ea2-4973-9e33-46b9b045b1c5</uuid>
	I1013 21:18:01.373632   20588 main.go:141] libmachine: (addons-323324) DBG |   <memory unit='KiB'>4194304</memory>
	I1013 21:18:01.373640   20588 main.go:141] libmachine: (addons-323324) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1013 21:18:01.373648   20588 main.go:141] libmachine: (addons-323324) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 21:18:01.373659   20588 main.go:141] libmachine: (addons-323324) DBG |   <os>
	I1013 21:18:01.373690   20588 main.go:141] libmachine: (addons-323324) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 21:18:01.373706   20588 main.go:141] libmachine: (addons-323324) DBG |     <boot dev='cdrom'/>
	I1013 21:18:01.373714   20588 main.go:141] libmachine: (addons-323324) DBG |     <boot dev='hd'/>
	I1013 21:18:01.373718   20588 main.go:141] libmachine: (addons-323324) DBG |     <bootmenu enable='no'/>
	I1013 21:18:01.373723   20588 main.go:141] libmachine: (addons-323324) DBG |   </os>
	I1013 21:18:01.373727   20588 main.go:141] libmachine: (addons-323324) DBG |   <features>
	I1013 21:18:01.373732   20588 main.go:141] libmachine: (addons-323324) DBG |     <acpi/>
	I1013 21:18:01.373738   20588 main.go:141] libmachine: (addons-323324) DBG |     <apic/>
	I1013 21:18:01.373743   20588 main.go:141] libmachine: (addons-323324) DBG |     <pae/>
	I1013 21:18:01.373750   20588 main.go:141] libmachine: (addons-323324) DBG |   </features>
	I1013 21:18:01.373756   20588 main.go:141] libmachine: (addons-323324) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 21:18:01.373760   20588 main.go:141] libmachine: (addons-323324) DBG |   <clock offset='utc'/>
	I1013 21:18:01.373765   20588 main.go:141] libmachine: (addons-323324) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 21:18:01.373771   20588 main.go:141] libmachine: (addons-323324) DBG |   <on_reboot>restart</on_reboot>
	I1013 21:18:01.373776   20588 main.go:141] libmachine: (addons-323324) DBG |   <on_crash>destroy</on_crash>
	I1013 21:18:01.373780   20588 main.go:141] libmachine: (addons-323324) DBG |   <devices>
	I1013 21:18:01.373818   20588 main.go:141] libmachine: (addons-323324) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 21:18:01.373841   20588 main.go:141] libmachine: (addons-323324) DBG |     <disk type='file' device='cdrom'>
	I1013 21:18:01.373853   20588 main.go:141] libmachine: (addons-323324) DBG |       <driver name='qemu' type='raw'/>
	I1013 21:18:01.373862   20588 main.go:141] libmachine: (addons-323324) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/boot2docker.iso'/>
	I1013 21:18:01.373871   20588 main.go:141] libmachine: (addons-323324) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 21:18:01.373881   20588 main.go:141] libmachine: (addons-323324) DBG |       <readonly/>
	I1013 21:18:01.373893   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 21:18:01.373910   20588 main.go:141] libmachine: (addons-323324) DBG |     </disk>
	I1013 21:18:01.373922   20588 main.go:141] libmachine: (addons-323324) DBG |     <disk type='file' device='disk'>
	I1013 21:18:01.373935   20588 main.go:141] libmachine: (addons-323324) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 21:18:01.373949   20588 main.go:141] libmachine: (addons-323324) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/addons-323324.rawdisk'/>
	I1013 21:18:01.373960   20588 main.go:141] libmachine: (addons-323324) DBG |       <target dev='hda' bus='virtio'/>
	I1013 21:18:01.373973   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 21:18:01.373985   20588 main.go:141] libmachine: (addons-323324) DBG |     </disk>
	I1013 21:18:01.373997   20588 main.go:141] libmachine: (addons-323324) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 21:18:01.374011   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 21:18:01.374021   20588 main.go:141] libmachine: (addons-323324) DBG |     </controller>
	I1013 21:18:01.374036   20588 main.go:141] libmachine: (addons-323324) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 21:18:01.374046   20588 main.go:141] libmachine: (addons-323324) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 21:18:01.374053   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 21:18:01.374060   20588 main.go:141] libmachine: (addons-323324) DBG |     </controller>
	I1013 21:18:01.374065   20588 main.go:141] libmachine: (addons-323324) DBG |     <interface type='network'>
	I1013 21:18:01.374070   20588 main.go:141] libmachine: (addons-323324) DBG |       <mac address='52:54:00:28:03:23'/>
	I1013 21:18:01.374075   20588 main.go:141] libmachine: (addons-323324) DBG |       <source network='mk-addons-323324'/>
	I1013 21:18:01.374080   20588 main.go:141] libmachine: (addons-323324) DBG |       <model type='virtio'/>
	I1013 21:18:01.374091   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 21:18:01.374095   20588 main.go:141] libmachine: (addons-323324) DBG |     </interface>
	I1013 21:18:01.374100   20588 main.go:141] libmachine: (addons-323324) DBG |     <interface type='network'>
	I1013 21:18:01.374105   20588 main.go:141] libmachine: (addons-323324) DBG |       <mac address='52:54:00:e2:70:c0'/>
	I1013 21:18:01.374110   20588 main.go:141] libmachine: (addons-323324) DBG |       <source network='default'/>
	I1013 21:18:01.374115   20588 main.go:141] libmachine: (addons-323324) DBG |       <model type='virtio'/>
	I1013 21:18:01.374137   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 21:18:01.374167   20588 main.go:141] libmachine: (addons-323324) DBG |     </interface>
	I1013 21:18:01.374183   20588 main.go:141] libmachine: (addons-323324) DBG |     <serial type='pty'>
	I1013 21:18:01.374199   20588 main.go:141] libmachine: (addons-323324) DBG |       <target type='isa-serial' port='0'>
	I1013 21:18:01.374214   20588 main.go:141] libmachine: (addons-323324) DBG |         <model name='isa-serial'/>
	I1013 21:18:01.374223   20588 main.go:141] libmachine: (addons-323324) DBG |       </target>
	I1013 21:18:01.374231   20588 main.go:141] libmachine: (addons-323324) DBG |     </serial>
	I1013 21:18:01.374239   20588 main.go:141] libmachine: (addons-323324) DBG |     <console type='pty'>
	I1013 21:18:01.374247   20588 main.go:141] libmachine: (addons-323324) DBG |       <target type='serial' port='0'/>
	I1013 21:18:01.374258   20588 main.go:141] libmachine: (addons-323324) DBG |     </console>
	I1013 21:18:01.374270   20588 main.go:141] libmachine: (addons-323324) DBG |     <input type='mouse' bus='ps2'/>
	I1013 21:18:01.374293   20588 main.go:141] libmachine: (addons-323324) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 21:18:01.374306   20588 main.go:141] libmachine: (addons-323324) DBG |     <audio id='1' type='none'/>
	I1013 21:18:01.374316   20588 main.go:141] libmachine: (addons-323324) DBG |     <memballoon model='virtio'>
	I1013 21:18:01.374330   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 21:18:01.374339   20588 main.go:141] libmachine: (addons-323324) DBG |     </memballoon>
	I1013 21:18:01.374349   20588 main.go:141] libmachine: (addons-323324) DBG |     <rng model='virtio'>
	I1013 21:18:01.374370   20588 main.go:141] libmachine: (addons-323324) DBG |       <backend model='random'>/dev/random</backend>
	I1013 21:18:01.374385   20588 main.go:141] libmachine: (addons-323324) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 21:18:01.374396   20588 main.go:141] libmachine: (addons-323324) DBG |     </rng>
	I1013 21:18:01.374407   20588 main.go:141] libmachine: (addons-323324) DBG |   </devices>
	I1013 21:18:01.374416   20588 main.go:141] libmachine: (addons-323324) DBG | </domain>
	I1013 21:18:01.374439   20588 main.go:141] libmachine: (addons-323324) DBG | 
	I1013 21:18:02.664438   20588 main.go:141] libmachine: (addons-323324) waiting for domain to start...
	I1013 21:18:02.665817   20588 main.go:141] libmachine: (addons-323324) domain is now running
	I1013 21:18:02.665856   20588 main.go:141] libmachine: (addons-323324) waiting for IP...
	I1013 21:18:02.666595   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:02.667116   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:02.667142   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:02.667407   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:02.667469   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:02.667410   20616 retry.go:31] will retry after 293.023095ms: waiting for domain to come up
	I1013 21:18:02.962326   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:02.962783   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:02.962806   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:02.963074   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:02.963107   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:02.963024   20616 retry.go:31] will retry after 301.548256ms: waiting for domain to come up
	I1013 21:18:03.266606   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:03.267098   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:03.267123   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:03.267454   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:03.267480   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:03.267389   20616 retry.go:31] will retry after 395.863743ms: waiting for domain to come up
	I1013 21:18:03.665113   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:03.665639   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:03.665672   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:03.665909   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:03.665930   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:03.665889   20616 retry.go:31] will retry after 423.454517ms: waiting for domain to come up
	I1013 21:18:04.091443   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:04.091991   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:04.092021   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:04.092361   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:04.092384   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:04.092335   20616 retry.go:31] will retry after 690.024335ms: waiting for domain to come up
	I1013 21:18:04.784324   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:04.784726   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:04.784754   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:04.784978   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:04.785024   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:04.784954   20616 retry.go:31] will retry after 809.660144ms: waiting for domain to come up
	I1013 21:18:05.596764   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:05.597263   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:05.597296   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:05.597521   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:05.597547   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:05.597493   20616 retry.go:31] will retry after 948.072398ms: waiting for domain to come up
	I1013 21:18:06.547487   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:06.548097   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:06.548119   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:06.548469   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:06.548498   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:06.548416   20616 retry.go:31] will retry after 942.490177ms: waiting for domain to come up
	I1013 21:18:07.492647   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:07.493107   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:07.493132   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:07.493428   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:07.493448   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:07.493398   20616 retry.go:31] will retry after 1.754619509s: waiting for domain to come up
	I1013 21:18:09.250361   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:09.250984   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:09.251015   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:09.251288   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:09.251328   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:09.251276   20616 retry.go:31] will retry after 1.843542159s: waiting for domain to come up
	I1013 21:18:11.097219   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:11.097958   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:11.097990   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:11.098314   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:11.098371   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:11.098296   20616 retry.go:31] will retry after 2.449495196s: waiting for domain to come up
	I1013 21:18:13.551133   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:13.551673   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:13.551695   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:13.551941   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:13.551973   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:13.551936   20616 retry.go:31] will retry after 3.450100206s: waiting for domain to come up
	I1013 21:18:17.005233   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:17.005750   20588 main.go:141] libmachine: (addons-323324) DBG | no network interface addresses found for domain addons-323324 (source=lease)
	I1013 21:18:17.005777   20588 main.go:141] libmachine: (addons-323324) DBG | trying to list again with source=arp
	I1013 21:18:17.006053   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find current IP address of domain addons-323324 in network mk-addons-323324 (interfaces detected: [])
	I1013 21:18:17.006087   20588 main.go:141] libmachine: (addons-323324) DBG | I1013 21:18:17.006043   20616 retry.go:31] will retry after 3.46958384s: waiting for domain to come up
	I1013 21:18:20.477622   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.478240   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has current primary IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.478259   20588 main.go:141] libmachine: (addons-323324) found domain IP: 192.168.39.156
	I1013 21:18:20.478268   20588 main.go:141] libmachine: (addons-323324) reserving static IP address...
	I1013 21:18:20.478762   20588 main.go:141] libmachine: (addons-323324) DBG | unable to find host DHCP lease matching {name: "addons-323324", mac: "52:54:00:28:03:23", ip: "192.168.39.156"} in network mk-addons-323324
	I1013 21:18:20.671688   20588 main.go:141] libmachine: (addons-323324) DBG | Getting to WaitForSSH function...
	I1013 21:18:20.671723   20588 main.go:141] libmachine: (addons-323324) reserved static IP address 192.168.39.156 for domain addons-323324
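	# A hedged sketch of checking the same address by hand, assuming virsh is available on the host;
	# the log above resolves the IP from DHCP leases first and falls back to ARP.
	virsh --connect qemu:///system net-dhcp-leases mk-addons-323324
	virsh --connect qemu:///system domifaddr addons-323324 --source arp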
	I1013 21:18:20.671748   20588 main.go:141] libmachine: (addons-323324) waiting for SSH...
	I1013 21:18:20.674972   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.675522   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:minikube Clientid:01:52:54:00:28:03:23}
	I1013 21:18:20.675558   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.675812   20588 main.go:141] libmachine: (addons-323324) DBG | Using SSH client type: external
	I1013 21:18:20.675841   20588 main.go:141] libmachine: (addons-323324) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa (-rw-------)
	I1013 21:18:20.675874   20588 main.go:141] libmachine: (addons-323324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 21:18:20.675890   20588 main.go:141] libmachine: (addons-323324) DBG | About to run SSH command:
	I1013 21:18:20.675925   20588 main.go:141] libmachine: (addons-323324) DBG | exit 0
	I1013 21:18:20.811282   20588 main.go:141] libmachine: (addons-323324) DBG | SSH cmd err, output: <nil>: 
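	# A sketch of reproducing the reachability probe by hand, reusing the key path and options the
	# "Using SSH private key" / external client lines above already show:
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa \
	  docker@192.168.39.156 'exit 0' && echo "ssh reachable"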
	I1013 21:18:20.811598   20588 main.go:141] libmachine: (addons-323324) domain creation complete
	I1013 21:18:20.811916   20588 main.go:141] libmachine: (addons-323324) Calling .GetConfigRaw
	I1013 21:18:20.812506   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:20.812724   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:20.812876   20588 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1013 21:18:20.812890   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:20.814265   20588 main.go:141] libmachine: Detecting operating system of created instance...
	I1013 21:18:20.814278   20588 main.go:141] libmachine: Waiting for SSH to be available...
	I1013 21:18:20.814283   20588 main.go:141] libmachine: Getting to WaitForSSH function...
	I1013 21:18:20.814288   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:20.817017   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.817505   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:20.817538   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.817689   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:20.817902   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:20.818083   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:20.818196   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:20.818333   20588 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:20.818592   20588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1013 21:18:20.818604   20588 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1013 21:18:20.923142   20588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:18:20.923183   20588 main.go:141] libmachine: Detecting the provisioner...
	I1013 21:18:20.923195   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:20.926226   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.926583   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:20.926611   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:20.926804   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:20.926990   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:20.927181   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:20.927366   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:20.927551   20588 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:20.927812   20588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1013 21:18:20.927825   20588 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1013 21:18:21.035410   20588 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1013 21:18:21.035478   20588 main.go:141] libmachine: found compatible host: buildroot
	I1013 21:18:21.035491   20588 main.go:141] libmachine: Provisioning with buildroot...
	I1013 21:18:21.035502   20588 main.go:141] libmachine: (addons-323324) Calling .GetMachineName
	I1013 21:18:21.035751   20588 buildroot.go:166] provisioning hostname "addons-323324"
	I1013 21:18:21.035779   20588 main.go:141] libmachine: (addons-323324) Calling .GetMachineName
	I1013 21:18:21.035958   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:21.038981   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.039460   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.039492   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.039607   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:21.039823   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.039995   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.040230   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:21.040435   20588 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:21.040712   20588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1013 21:18:21.040734   20588 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-323324 && echo "addons-323324" | sudo tee /etc/hostname
	I1013 21:18:21.163137   20588 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-323324
	
	I1013 21:18:21.163183   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:21.166048   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.166453   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.166482   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.166634   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:21.166827   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.167064   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.167230   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:21.167418   20588 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:21.167600   20588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1013 21:18:21.167617   20588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-323324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-323324/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-323324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:18:21.281999   20588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:18:21.282034   20588 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 21:18:21.282051   20588 buildroot.go:174] setting up certificates
	I1013 21:18:21.282062   20588 provision.go:84] configureAuth start
	I1013 21:18:21.282069   20588 main.go:141] libmachine: (addons-323324) Calling .GetMachineName
	I1013 21:18:21.282369   20588 main.go:141] libmachine: (addons-323324) Calling .GetIP
	I1013 21:18:21.285827   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.286263   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.286286   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.286506   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:21.289255   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.289683   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.289704   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.289862   20588 provision.go:143] copyHostCerts
	I1013 21:18:21.289930   20588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 21:18:21.290058   20588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 21:18:21.290131   20588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 21:18:21.290267   20588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.addons-323324 san=[127.0.0.1 192.168.39.156 addons-323324 localhost minikube]
	I1013 21:18:21.487472   20588 provision.go:177] copyRemoteCerts
	I1013 21:18:21.487529   20588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:18:21.487550   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:21.490540   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.490926   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.490956   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.491252   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:21.491443   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.491604   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:21.491761   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:21.585214   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 21:18:21.619090   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 21:18:21.652390   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 21:18:21.686462   20588 provision.go:87] duration metric: took 404.388206ms to configureAuth
	I1013 21:18:21.686489   20588 buildroot.go:189] setting minikube options for container-runtime
	I1013 21:18:21.686770   20588 config.go:182] Loaded profile config "addons-323324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:18:21.686922   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:21.690106   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.690554   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.690584   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.690806   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:21.691019   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.691234   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.691420   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:21.691605   20588 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:21.691820   20588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1013 21:18:21.691835   20588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:18:21.938832   20588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:18:21.938860   20588 main.go:141] libmachine: Checking connection to Docker...
	I1013 21:18:21.938869   20588 main.go:141] libmachine: (addons-323324) Calling .GetURL
	I1013 21:18:21.940243   20588 main.go:141] libmachine: (addons-323324) DBG | using libvirt version 8000000
	I1013 21:18:21.942839   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.943183   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.943215   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.943344   20588 main.go:141] libmachine: Docker is up and running!
	I1013 21:18:21.943358   20588 main.go:141] libmachine: Reticulating splines...
	I1013 21:18:21.943366   20588 client.go:171] duration metric: took 21.674435616s to LocalClient.Create
	I1013 21:18:21.943396   20588 start.go:167] duration metric: took 21.674490719s to libmachine.API.Create "addons-323324"
	I1013 21:18:21.943408   20588 start.go:293] postStartSetup for "addons-323324" (driver="kvm2")
	I1013 21:18:21.943421   20588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:18:21.943441   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:21.943698   20588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:18:21.943728   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:21.945928   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.946291   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:21.946313   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:21.946492   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:21.946674   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:21.946861   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:21.947078   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:22.030090   20588 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:18:22.035393   20588 info.go:137] Remote host: Buildroot 2025.02
	I1013 21:18:22.035416   20588 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 21:18:22.035486   20588 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 21:18:22.035521   20588 start.go:296] duration metric: took 92.105291ms for postStartSetup
	I1013 21:18:22.035557   20588 main.go:141] libmachine: (addons-323324) Calling .GetConfigRaw
	I1013 21:18:22.036243   20588 main.go:141] libmachine: (addons-323324) Calling .GetIP
	I1013 21:18:22.038942   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.039345   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:22.039366   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.039663   20588 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/config.json ...
	I1013 21:18:22.039879   20588 start.go:128] duration metric: took 21.788229414s to createHost
	I1013 21:18:22.039913   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:22.042773   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.043290   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:22.043316   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.043482   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:22.043725   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:22.043871   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:22.044004   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:22.044139   20588 main.go:141] libmachine: Using SSH client type: native
	I1013 21:18:22.044369   20588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1013 21:18:22.044381   20588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 21:18:22.151980   20588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760390302.113699670
	
	I1013 21:18:22.152004   20588 fix.go:216] guest clock: 1760390302.113699670
	I1013 21:18:22.152013   20588 fix.go:229] Guest: 2025-10-13 21:18:22.11369967 +0000 UTC Remote: 2025-10-13 21:18:22.039900423 +0000 UTC m=+21.899569747 (delta=73.799247ms)
	I1013 21:18:22.152037   20588 fix.go:200] guest clock delta is within tolerance: 73.799247ms
	I1013 21:18:22.152044   20588 start.go:83] releasing machines lock for "addons-323324", held for 21.90046503s
	I1013 21:18:22.152069   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:22.152367   20588 main.go:141] libmachine: (addons-323324) Calling .GetIP
	I1013 21:18:22.155484   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.155839   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:22.155870   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.156048   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:22.156517   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:22.156755   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:22.156881   20588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:18:22.156931   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:22.156969   20588 ssh_runner.go:195] Run: cat /version.json
	I1013 21:18:22.156993   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:22.160091   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.160269   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.160540   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:22.160566   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.160598   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:22.160615   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:22.160766   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:22.160998   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:22.161014   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:22.161172   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:22.161252   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:22.161341   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:22.161462   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:22.161793   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:22.264229   20588 ssh_runner.go:195] Run: systemctl --version
	I1013 21:18:22.270867   20588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:18:22.428668   20588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:18:22.437362   20588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:18:22.437458   20588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:18:22.461379   20588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
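	# A hedged sketch for verifying the step above on the guest: the bridge/podman CNI configs are
	# renamed with a .mk_disabled suffix rather than deleted, so they should still be listed.
	ls -l /etc/cni/net.d/*.mk_disabled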
	I1013 21:18:22.461414   20588 start.go:495] detecting cgroup driver to use...
	I1013 21:18:22.461474   20588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:18:22.488946   20588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:18:22.509227   20588 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:18:22.509283   20588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:18:22.529338   20588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:18:22.545721   20588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:18:22.688961   20588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:18:22.899740   20588 docker.go:234] disabling docker service ...
	I1013 21:18:22.899813   20588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:18:22.917409   20588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:18:22.933369   20588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:18:23.096615   20588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:18:23.242283   20588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:18:23.258493   20588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:18:23.283143   20588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:18:23.283208   20588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:23.297930   20588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 21:18:23.297994   20588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:23.312867   20588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:23.327310   20588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:23.341351   20588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:18:23.356103   20588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:23.370130   20588 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:18:23.391785   20588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
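	# A sketch of checking the values the sed edits above are expected to leave in the drop-in
	# (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf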
	I1013 21:18:23.404980   20588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:18:23.416189   20588 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 21:18:23.416239   20588 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 21:18:23.436556   20588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:18:23.448885   20588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:23.590355   20588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:18:23.710929   20588 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:18:23.711016   20588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:18:23.716772   20588 start.go:563] Will wait 60s for crictl version
	I1013 21:18:23.716839   20588 ssh_runner.go:195] Run: which crictl
	I1013 21:18:23.721471   20588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 21:18:23.763515   20588 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 21:18:23.763618   20588 ssh_runner.go:195] Run: crio --version
	I1013 21:18:23.794138   20588 ssh_runner.go:195] Run: crio --version
	I1013 21:18:23.825431   20588 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 21:18:23.826511   20588 main.go:141] libmachine: (addons-323324) Calling .GetIP
	I1013 21:18:23.829417   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:23.829843   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:23.829872   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:23.830095   20588 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 21:18:23.835250   20588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:18:23.851397   20588 kubeadm.go:883] updating cluster {Name:addons-323324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:18:23.851530   20588 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:18:23.851595   20588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:18:23.890958   20588 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 21:18:23.891027   20588 ssh_runner.go:195] Run: which lz4
	I1013 21:18:23.895575   20588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 21:18:23.901843   20588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 21:18:23.901881   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1013 21:18:25.463537   20588 crio.go:462] duration metric: took 1.568001736s to copy over tarball
	I1013 21:18:25.463613   20588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 21:18:27.193081   20588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.729435122s)
	I1013 21:18:27.193116   20588 crio.go:469] duration metric: took 1.729552101s to extract the tarball
	I1013 21:18:27.193127   20588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 21:18:27.235617   20588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:18:27.282331   20588 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:18:27.282353   20588 cache_images.go:85] Images are preloaded, skipping loading
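	# A hedged sketch for spot-checking the preload on the guest; the earlier probe looked for
	# registry.k8s.io/kube-apiserver:v1.34.1, so it should now appear in the CRI image list.
	sudo crictl images | grep kube-apiserver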
	I1013 21:18:27.282361   20588 kubeadm.go:934] updating node { 192.168.39.156 8443 v1.34.1 crio true true} ...
	I1013 21:18:27.282444   20588 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-323324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-323324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:18:27.282505   20588 ssh_runner.go:195] Run: crio config
	I1013 21:18:27.332066   20588 cni.go:84] Creating CNI manager for ""
	I1013 21:18:27.332089   20588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:18:27.332107   20588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:18:27.332125   20588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-323324 NodeName:addons-323324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:18:27.332244   20588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-323324"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.156"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:18:27.332313   20588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:18:27.345918   20588 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:18:27.345993   20588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:18:27.359172   20588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1013 21:18:27.381875   20588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:18:27.403704   20588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1013 21:18:27.425887   20588 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I1013 21:18:27.430038   20588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
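The one-liner above keeps the control-plane.minikube.internal entry idempotent: it filters out any existing line ending in that hostname, appends a fresh "IP<tab>hostname" mapping, and sudo-copies the temp file back over /etc/hosts. A small sketch that rebuilds the same command string (hostsUpdateCmd is a hypothetical helper, not minikube's code):

// Sketch: reconstruct the idempotent /etc/hosts update shown above.
// The \\t stays literal for bash's $'...' pattern, while the echo embeds a
// real tab between the IP and the hostname, as in the logged command.
package main

import "fmt"

func hostsUpdateCmd(ip, host string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.39.156", "control-plane.minikube.internal"))
}
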
	I1013 21:18:27.444909   20588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:27.584802   20588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:18:27.614061   20588 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324 for IP: 192.168.39.156
	I1013 21:18:27.614083   20588 certs.go:195] generating shared ca certs ...
	I1013 21:18:27.614100   20588 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:27.614282   20588 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 21:18:27.870721   20588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt ...
	I1013 21:18:27.870752   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt: {Name:mk3270283bc0d394f27480d3a781a05ec35228e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:27.870979   20588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key ...
	I1013 21:18:27.870994   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key: {Name:mk7cf38cbd5459cbf10963baf29172caf0d98a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:27.871101   20588 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 21:18:28.043729   20588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt ...
	I1013 21:18:28.043756   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt: {Name:mk4bba109df35e3b93ae4414e0b7f0702751fbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:28.043953   20588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key ...
	I1013 21:18:28.043966   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key: {Name:mkee3fec1c75bdd845c4cf0f0f569c1f6c532196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:28.044067   20588 certs.go:257] generating profile certs ...
	I1013 21:18:28.044121   20588 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.key
	I1013 21:18:28.044143   20588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt with IP's: []
	I1013 21:18:28.326870   20588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt ...
	I1013 21:18:28.326896   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: {Name:mka284452e2b5f526d4b15219595e258332216ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:28.327078   20588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.key ...
	I1013 21:18:28.327092   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.key: {Name:mkb468214ab8c2c727584e8b9d10ae7ce748e10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:28.327282   20588 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.key.e08efaa5
	I1013 21:18:28.327321   20588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.crt.e08efaa5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.156]
	I1013 21:18:28.729101   20588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.crt.e08efaa5 ...
	I1013 21:18:28.729136   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.crt.e08efaa5: {Name:mk98f5df6350de05f7ee67b0877f71f73af4e63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:28.729320   20588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.key.e08efaa5 ...
	I1013 21:18:28.729338   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.key.e08efaa5: {Name:mkb3cfd7141bc751d3e620041dd052257c693874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:28.729443   20588 certs.go:382] copying /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.crt.e08efaa5 -> /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.crt
	I1013 21:18:28.729537   20588 certs.go:386] copying /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.key.e08efaa5 -> /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.key
	I1013 21:18:28.729612   20588 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.key
	I1013 21:18:28.729637   20588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.crt with IP's: []
	I1013 21:18:29.017320   20588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.crt ...
	I1013 21:18:29.017351   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.crt: {Name:mk3955f8c3d35d36dff648f519073ce047cc514e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:29.017534   20588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.key ...
	I1013 21:18:29.017551   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.key: {Name:mk38a55a5df258b33e1226f298a246d2ba0d523c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:29.017741   20588 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:18:29.017786   20588 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:18:29.017819   20588 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:18:29.017851   20588 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
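The certs.go/crypto.go lines above generate a fresh minikubeCA and proxyClientCA, then sign the profile certificates, including an apiserver cert whose SANs are the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.156. A simplified sketch of issuing such an IP-SAN certificate with crypto/x509; key sizes, serial handling and the on-disk layout are reduced for brevity and are not minikube's exact choices:

// Sketch: create a CA, then issue a server certificate whose SANs are the
// IPs seen in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.156"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The PEM written to stdout corresponds roughly to what ends up as apiserver.crt under /var/lib/minikube/certs in the scp steps that follow.
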
	I1013 21:18:29.018513   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:18:29.052988   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:18:29.085697   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:18:29.118490   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:18:29.155415   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 21:18:29.198837   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:18:29.234095   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:18:29.273042   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:18:29.307632   20588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:18:29.353361   20588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:18:29.376834   20588 ssh_runner.go:195] Run: openssl version
	I1013 21:18:29.384183   20588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:18:29.400118   20588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:29.406099   20588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:29.406176   20588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:18:29.414349   20588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
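The two commands above wire the minikubeCA into the guest's trust store: openssl x509 -hash -noout prints the certificate's subject hash (b5213941 here), and the PEM is then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it by hash lookup. A sketch of the same two steps (assumes openssl on PATH and enough privileges to write under /etc/ssl/certs):

// Sketch: compute the subject hash of a CA PEM with openssl and point the
// hash-named symlink in /etc/ssl/certs at it, mirroring the commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // tolerate a missing link, like the ln -fs above
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
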
	I1013 21:18:29.429959   20588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:18:29.435360   20588 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:18:29.435410   20588 kubeadm.go:400] StartCluster: {Name:addons-323324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-323324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:18:29.435469   20588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:18:29.435510   20588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:18:29.475679   20588 cri.go:89] found id: ""
	I1013 21:18:29.475739   20588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:18:29.489201   20588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:18:29.502405   20588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:18:29.514985   20588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:18:29.515033   20588 kubeadm.go:157] found existing configuration files:
	
	I1013 21:18:29.515083   20588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:18:29.526876   20588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:18:29.526940   20588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:18:29.540130   20588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:18:29.552540   20588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:18:29.552599   20588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:18:29.565941   20588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:18:29.578048   20588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:18:29.578120   20588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:18:29.590940   20588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:18:29.602935   20588 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:18:29.602987   20588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:18:29.616043   20588 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 21:18:29.776517   20588 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 21:18:42.038058   20588 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:18:42.038132   20588 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:18:42.038268   20588 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:18:42.038395   20588 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:18:42.038519   20588 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:18:42.038586   20588 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:18:42.040084   20588 out.go:252]   - Generating certificates and keys ...
	I1013 21:18:42.040149   20588 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:18:42.040238   20588 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:18:42.040318   20588 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:18:42.040407   20588 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:18:42.040485   20588 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:18:42.040562   20588 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:18:42.040632   20588 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:18:42.040792   20588 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-323324 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I1013 21:18:42.040852   20588 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:18:42.040987   20588 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-323324 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I1013 21:18:42.041090   20588 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:18:42.041220   20588 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:18:42.041275   20588 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:18:42.041327   20588 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:18:42.041381   20588 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:18:42.041430   20588 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:18:42.041487   20588 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:18:42.041548   20588 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:18:42.041596   20588 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:18:42.041675   20588 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:18:42.041735   20588 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:18:42.042936   20588 out.go:252]   - Booting up control plane ...
	I1013 21:18:42.043033   20588 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:18:42.043104   20588 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:18:42.043196   20588 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:18:42.043284   20588 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:18:42.043370   20588 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:18:42.043462   20588 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:18:42.043538   20588 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:18:42.043588   20588 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:18:42.043698   20588 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:18:42.043802   20588 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:18:42.043854   20588 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501989636s
	I1013 21:18:42.043934   20588 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:18:42.044019   20588 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.156:8443/livez
	I1013 21:18:42.044105   20588 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:18:42.044222   20588 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:18:42.044301   20588 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.166513942s
	I1013 21:18:42.044369   20588 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.714678607s
	I1013 21:18:42.044434   20588 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502393618s
	I1013 21:18:42.044520   20588 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 21:18:42.044622   20588 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 21:18:42.044672   20588 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 21:18:42.044859   20588 kubeadm.go:318] [mark-control-plane] Marking the node addons-323324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 21:18:42.044923   20588 kubeadm.go:318] [bootstrap-token] Using token: sxl79q.od31yuf0xjhgmas6
	I1013 21:18:42.046923   20588 out.go:252]   - Configuring RBAC rules ...
	I1013 21:18:42.047014   20588 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 21:18:42.047086   20588 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 21:18:42.047236   20588 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 21:18:42.047351   20588 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 21:18:42.047464   20588 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 21:18:42.047551   20588 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 21:18:42.047665   20588 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 21:18:42.047710   20588 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 21:18:42.047756   20588 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 21:18:42.047766   20588 kubeadm.go:318] 
	I1013 21:18:42.047821   20588 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 21:18:42.047827   20588 kubeadm.go:318] 
	I1013 21:18:42.047895   20588 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 21:18:42.047903   20588 kubeadm.go:318] 
	I1013 21:18:42.047940   20588 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 21:18:42.048026   20588 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 21:18:42.048093   20588 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 21:18:42.048102   20588 kubeadm.go:318] 
	I1013 21:18:42.048172   20588 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 21:18:42.048185   20588 kubeadm.go:318] 
	I1013 21:18:42.048231   20588 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 21:18:42.048237   20588 kubeadm.go:318] 
	I1013 21:18:42.048277   20588 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 21:18:42.048337   20588 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 21:18:42.048391   20588 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 21:18:42.048397   20588 kubeadm.go:318] 
	I1013 21:18:42.048478   20588 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 21:18:42.048553   20588 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 21:18:42.048561   20588 kubeadm.go:318] 
	I1013 21:18:42.048640   20588 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sxl79q.od31yuf0xjhgmas6 \
	I1013 21:18:42.048737   20588 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d396999fc962400d165f32f491fa5cab093d5e40df4f2ebb82ee782483cb7762 \
	I1013 21:18:42.048758   20588 kubeadm.go:318] 	--control-plane 
	I1013 21:18:42.048761   20588 kubeadm.go:318] 
	I1013 21:18:42.048837   20588 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 21:18:42.048844   20588 kubeadm.go:318] 
	I1013 21:18:42.048906   20588 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sxl79q.od31yuf0xjhgmas6 \
	I1013 21:18:42.048998   20588 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d396999fc962400d165f32f491fa5cab093d5e40df4f2ebb82ee782483cb7762 
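In the [kubelet-check] and [control-plane-check] phases above, kubeadm polls the component health endpoints (kubelet on 127.0.0.1:10248/healthz, controller-manager on 127.0.0.1:10257/healthz, scheduler on 127.0.0.1:10259/livez, apiserver on 192.168.39.156:8443/livez) until each answers OK or the 4m0s budget runs out. A rough sketch of such a polling loop; TLS verification is skipped here only to keep the example short:

// Sketch: keep GETing a health endpoint until it returns 200 or a deadline
// passes, similar to the control-plane-check messages above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",
		"https://127.0.0.1:10257/healthz",
		"https://127.0.0.1:10259/livez",
		"https://192.168.39.156:8443/livez",
	} {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}
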
	I1013 21:18:42.049012   20588 cni.go:84] Creating CNI manager for ""
	I1013 21:18:42.049021   20588 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:18:42.050623   20588 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 21:18:42.051940   20588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 21:18:42.066094   20588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
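The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration chosen at cni.go:146 for the kvm2 + crio combination. Its exact contents are not shown in the log; the sketch below only emits a generic bridge + host-local + portmap conflist for the 10.244.0.0/16 pod subnet to illustrate the file format:

// Sketch: print a generic bridge CNI conflist (not the exact file from the
// log) covering the cluster's 10.244.0.0/16 pod subnet.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	b, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
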
	I1013 21:18:42.088507   20588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 21:18:42.088654   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-323324 minikube.k8s.io/updated_at=2025_10_13T21_18_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22 minikube.k8s.io/name=addons-323324 minikube.k8s.io/primary=true
	I1013 21:18:42.088659   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:42.155428   20588 ops.go:34] apiserver oom_adj: -16
	I1013 21:18:42.268432   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:42.768871   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:43.269532   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:43.769142   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:44.269379   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:44.769189   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:45.268532   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:45.768871   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:46.268699   20588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 21:18:46.364773   20588 kubeadm.go:1113] duration metric: took 4.276182418s to wait for elevateKubeSystemPrivileges
	I1013 21:18:46.364815   20588 kubeadm.go:402] duration metric: took 16.929407343s to StartCluster
	I1013 21:18:46.364838   20588 settings.go:142] acquiring lock: {Name:mk429dcebf497c5553c28c0bde1089c59d439da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:46.364981   20588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:18:46.365478   20588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/kubeconfig: {Name:mkba5ceb9d6438ffa1375fb51eda64fa770df7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:18:46.365728   20588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 21:18:46.365744   20588 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:18:46.365827   20588 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
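The toEnable map above drives the addon rollout; the burst of "Setting addon ... in addons-323324" and libmachine plugin-server lines that follows comes from the enabled addons being configured in parallel. A toy sketch of that fan-out; enableAddon and the hard-coded profile name are placeholders, not minikube's API:

// Toy sketch: enable each selected addon in its own goroutine and wait for
// all of them, mirroring the interleaved log output below.
package main

import (
	"fmt"
	"sync"
)

func enableAddon(profile, name string) {
	// Placeholder for the real work (applying manifests, talking to the
	// driver plugin server, etc.).
	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
}

func main() {
	toEnable := map[string]bool{
		"ingress": true, "ingress-dns": true, "metrics-server": true,
		"registry": true, "storage-provisioner": true, "yakd": true,
	}
	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			enableAddon("addons-323324", n)
		}(name)
	}
	wg.Wait()
}
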
	I1013 21:18:46.365953   20588 addons.go:69] Setting yakd=true in profile "addons-323324"
	I1013 21:18:46.365988   20588 addons.go:238] Setting addon yakd=true in "addons-323324"
	I1013 21:18:46.365989   20588 addons.go:69] Setting inspektor-gadget=true in profile "addons-323324"
	I1013 21:18:46.366004   20588 addons.go:69] Setting ingress=true in profile "addons-323324"
	I1013 21:18:46.366022   20588 addons.go:238] Setting addon inspektor-gadget=true in "addons-323324"
	I1013 21:18:46.366029   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366033   20588 addons.go:69] Setting gcp-auth=true in profile "addons-323324"
	I1013 21:18:46.366030   20588 addons.go:69] Setting default-storageclass=true in profile "addons-323324"
	I1013 21:18:46.366058   20588 addons.go:69] Setting ingress-dns=true in profile "addons-323324"
	I1013 21:18:46.366052   20588 addons.go:69] Setting cloud-spanner=true in profile "addons-323324"
	I1013 21:18:46.366062   20588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-323324"
	I1013 21:18:46.366069   20588 addons.go:238] Setting addon ingress-dns=true in "addons-323324"
	I1013 21:18:46.366070   20588 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-323324"
	I1013 21:18:46.366075   20588 addons.go:238] Setting addon cloud-spanner=true in "addons-323324"
	I1013 21:18:46.366094   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366025   20588 addons.go:238] Setting addon ingress=true in "addons-323324"
	I1013 21:18:46.366120   20588 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-323324"
	I1013 21:18:46.366126   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366140   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366486   20588 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-323324"
	I1013 21:18:46.366504   20588 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-323324"
	I1013 21:18:46.366525   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366565   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.365989   20588 config.go:182] Loaded profile config "addons-323324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:18:46.366598   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.366601   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.366101   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366622   20588 addons.go:69] Setting metrics-server=true in profile "addons-323324"
	I1013 21:18:46.366633   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.366635   20588 addons.go:238] Setting addon metrics-server=true in "addons-323324"
	I1013 21:18:46.366637   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.366653   20588 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-323324"
	I1013 21:18:46.366662   20588 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-323324"
	I1013 21:18:46.366669   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.366680   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366601   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.366705   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.366047   20588 mustload.go:65] Loading cluster: addons-323324
	I1013 21:18:46.366062   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.366874   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.366896   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.366956   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.366962   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.366979   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.366988   20588 addons.go:69] Setting storage-provisioner=true in profile "addons-323324"
	I1013 21:18:46.366999   20588 addons.go:238] Setting addon storage-provisioner=true in "addons-323324"
	I1013 21:18:46.367021   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.367025   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.367052   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.367074   20588 addons.go:69] Setting registry-creds=true in profile "addons-323324"
	I1013 21:18:46.367086   20588 addons.go:238] Setting addon registry-creds=true in "addons-323324"
	I1013 21:18:46.367089   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.367101   20588 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-323324"
	I1013 21:18:46.367104   20588 addons.go:69] Setting registry=true in profile "addons-323324"
	I1013 21:18:46.367110   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.367112   20588 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-323324"
	I1013 21:18:46.367114   20588 addons.go:238] Setting addon registry=true in "addons-323324"
	I1013 21:18:46.367129   20588 addons.go:69] Setting volcano=true in profile "addons-323324"
	I1013 21:18:46.367136   20588 addons.go:69] Setting volumesnapshots=true in profile "addons-323324"
	I1013 21:18:46.367140   20588 addons.go:238] Setting addon volcano=true in "addons-323324"
	I1013 21:18:46.367145   20588 addons.go:238] Setting addon volumesnapshots=true in "addons-323324"
	I1013 21:18:46.367179   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.367183   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.367292   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.367526   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.367559   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.367716   20588 out.go:179] * Verifying Kubernetes components...
	I1013 21:18:46.367881   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.367903   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.367914   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.368367   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.368390   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.368397   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.368418   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.368531   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.368615   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.369169   20588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:18:46.372722   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.372760   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.372845   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.372878   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.373484   20588 config.go:182] Loaded profile config "addons-323324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:18:46.373811   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.373836   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.377689   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.377737   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.399009   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37677
	I1013 21:18:46.400739   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.401472   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.401505   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.402052   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.402678   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.402728   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.402941   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I1013 21:18:46.406271   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.406845   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.406864   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.407249   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.407874   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.408003   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.410805   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I1013 21:18:46.411421   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.411941   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.411959   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.412419   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.413185   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.417261   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44417
	I1013 21:18:46.422784   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.422943   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I1013 21:18:46.423124   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I1013 21:18:46.423277   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I1013 21:18:46.423387   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I1013 21:18:46.423765   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.424124   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.424417   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42791
	I1013 21:18:46.424446   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.424464   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.424743   20588 addons.go:238] Setting addon default-storageclass=true in "addons-323324"
	I1013 21:18:46.424788   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.424874   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.424899   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.425182   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.425227   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.425549   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.425579   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.425597   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.425676   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.425692   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.426006   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.426087   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.426254   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.426268   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.426691   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.426724   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.426913   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I1013 21:18:46.427104   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.427176   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.427277   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.427877   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.427911   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.428913   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.429068   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.429333   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.429240   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.429365   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.430257   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.430268   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.430779   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.430808   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.430844   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I1013 21:18:46.431467   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.432208   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.432228   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.432656   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.433548   20588 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-323324"
	I1013 21:18:46.433589   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.433919   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.433951   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.434393   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1013 21:18:46.434525   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.434630   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.434704   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.434740   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.435316   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.435336   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.435394   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.435427   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.435606   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.436021   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.436815   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1013 21:18:46.437090   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34029
	I1013 21:18:46.441673   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.441702   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.442425   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.443023   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.443273   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.443312   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.443553   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.443569   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.444188   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.444224   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.444401   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I1013 21:18:46.444425   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.444945   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.445429   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.445460   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.446135   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.446165   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.446538   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.447401   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I1013 21:18:46.448124   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.448210   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.448870   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.449657   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.450121   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.450135   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.450523   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.450686   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.451655   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.451693   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.458441   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.459223   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.459271   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.459310   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.459582   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I1013 21:18:46.461861   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I1013 21:18:46.462067   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I1013 21:18:46.463657   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.464474   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I1013 21:18:46.464625   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I1013 21:18:46.464759   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I1013 21:18:46.464812   20588 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 21:18:46.465362   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.465730   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.465884   20588 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 21:18:46.465919   20588 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 21:18:46.465940   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.465959   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.465975   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.466494   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.466509   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.466581   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.466851   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.466972   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.467106   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.467117   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.467185   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.467369   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.469619   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.470432   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.470470   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.470737   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.470822   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.471213   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.471727   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.471806   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.471873   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.471996   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.472110   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.473088   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.473110   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.473396   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.473459   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.473922   20588 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 21:18:46.474011   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.474028   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.474068   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.474307   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.474366   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.475087   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.475231   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I1013 21:18:46.475367   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.475380   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.475432   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.475472   20588 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 21:18:46.475486   20588 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 21:18:46.475513   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.475587   20588 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 21:18:46.475960   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.476050   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.475893   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.476568   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.476586   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.476846   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:46.476876   20588 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 21:18:46.476986   20588 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 21:18:46.477003   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.477211   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.479111   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.479325   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.482274   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.483406   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.483532   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.485776   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.486674   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.486695   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.487153   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.487746   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.487951   20588 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 21:18:46.488143   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.488485   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.489217   20588 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 21:18:46.489233   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 21:18:46.489251   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.489456   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I1013 21:18:46.489989   20588 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 21:18:46.489943   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.490104   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.490651   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.490667   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.491035   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.491417   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.491523   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I1013 21:18:46.492859   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.494427   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.494606   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.494824   20588 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 21:18:46.495142   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.495248   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I1013 21:18:46.495424   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.495656   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.495867   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.496392   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I1013 21:18:46.496419   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.496877   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.496895   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.496966   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.497041   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.497056   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.497084   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.497251   20588 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 21:18:46.497266   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 21:18:46.497283   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.497290   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.497484   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.497645   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.497658   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.497710   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.498036   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.498233   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.498443   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.498616   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.499512   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.499630   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.499674   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.499981   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.500134   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 21:18:46.500709   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:46.500756   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:46.500939   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.501124   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.501699   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.502320   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.502420   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.503190   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.503280   20588 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 21:18:46.503296   20588 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 21:18:46.503315   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.504139   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
	I1013 21:18:46.504394   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.504560   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.504922   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.505008   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.505238   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.505812   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.506233   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I1013 21:18:46.506260   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.506276   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.507245   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.507491   20588 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 21:18:46.507509   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.507599   20588 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 21:18:46.508531   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I1013 21:18:46.508805   20588 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 21:18:46.508820   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 21:18:46.508836   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.509062   20588 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 21:18:46.509097   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 21:18:46.509124   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.510347   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.510407   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.510505   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.510522   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.510555   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.511299   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.511316   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.511784   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.511827   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.511977   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.512033   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.512141   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.512347   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.513524   20588 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 21:18:46.514326   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1013 21:18:46.514483   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.514648   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I1013 21:18:46.514841   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.514905   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.515023   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.515216   20588 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 21:18:46.515228   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 21:18:46.515245   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.515579   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.515660   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.516132   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.516169   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.516411   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.516493   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.516672   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.516830   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.517445   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.517954   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.518019   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.518147   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.518174   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.518362   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I1013 21:18:46.518579   20588 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 21:18:46.518839   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.518886   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.519107   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.519124   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.519130   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.519224   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.519651   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.519746   20588 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 21:18:46.519762   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 21:18:46.519779   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.519859   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.519915   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.520031   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.520505   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.520693   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.520944   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.521238   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.521575   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.521622   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.521789   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.521806   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.522594   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.522619   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.522773   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.522774   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.522830   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.523047   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.523171   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.523245   20588 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:18:46.523309   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.523484   20588 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 21:18:46.523496   20588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 21:18:46.523539   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.523564   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.524105   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.525915   20588 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 21:18:46.525915   20588 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1013 21:18:46.526670   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I1013 21:18:46.526779   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.527439   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.527474   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.527564   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:46.527590   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:46.527735   20588 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:18:46.527747   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 21:18:46.527762   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.528051   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.528067   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.528115   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:46.528152   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.528192   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.528744   20588 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:18:46.528896   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.528949   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.529207   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.529414   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I1013 21:18:46.529444   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:46.529456   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:46.529580   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:46.529802   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:46.529807   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.529467   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.529974   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.530202   20588 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 21:18:46.530217   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 21:18:46.530234   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.530541   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:46.530572   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:46.530580   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 21:18:46.530647   20588 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 21:18:46.531263   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.532381   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.532449   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.532465   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.532561   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.533216   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.533306   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.533364   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.533455   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:46.534139   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.534296   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 21:18:46.534359   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.534536   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.534743   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I1013 21:18:46.534823   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.535277   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:46.535334   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.535579   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.535776   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:46.535794   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:46.535826   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.535826   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.535843   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.536084   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.536186   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:46.536281   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.536449   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.536450   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:46.536528   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 21:18:46.536627   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.536658   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.536919   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.537028   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.537231   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.537426   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.537585   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.537943   20588 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 21:18:46.539397   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 21:18:46.540595   20588 out.go:179]   - Using image docker.io/busybox:stable
	I1013 21:18:46.541596   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 21:18:46.541630   20588 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 21:18:46.541641   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 21:18:46.541656   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.543293   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 21:18:46.544463   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1013 21:18:46.545415   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.545982   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.546009   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.546205   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.546379   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.546555   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.546561   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 21:18:46.546703   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:46.548712   20588 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 21:18:46.549741   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 21:18:46.549755   20588 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 21:18:46.549769   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:46.553404   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.553937   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:46.553963   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:46.554130   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:46.554328   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:46.554495   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:46.554635   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	W1013 21:18:46.769996   20588 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41296->192.168.39.156:22: read: connection reset by peer
	I1013 21:18:46.770032   20588 retry.go:31] will retry after 259.28355ms: ssh: handshake failed: read tcp 192.168.39.1:41296->192.168.39.156:22: read: connection reset by peer
	W1013 21:18:46.770980   20588 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41310->192.168.39.156:22: read: connection reset by peer
	I1013 21:18:46.771016   20588 retry.go:31] will retry after 231.764989ms: ssh: handshake failed: read tcp 192.168.39.1:41310->192.168.39.156:22: read: connection reset by peer
	I1013 21:18:47.078435   20588 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 21:18:47.078473   20588 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 21:18:47.086563   20588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:18:47.086616   20588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 21:18:47.128008   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 21:18:47.251263   20588 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 21:18:47.251285   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 21:18:47.301899   20588 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:18:47.301923   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 21:18:47.316560   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 21:18:47.334636   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 21:18:47.335974   20588 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 21:18:47.335992   20588 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 21:18:47.340678   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 21:18:47.400589   20588 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 21:18:47.400622   20588 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 21:18:47.446825   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 21:18:47.481543   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 21:18:47.493968   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:18:47.511425   20588 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 21:18:47.511445   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 21:18:47.620211   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 21:18:47.685177   20588 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 21:18:47.685209   20588 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 21:18:47.965311   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 21:18:47.965339   20588 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 21:18:47.992092   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:18:47.999026   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 21:18:48.022038   20588 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 21:18:48.022072   20588 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 21:18:48.123799   20588 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 21:18:48.123824   20588 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 21:18:48.195425   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 21:18:48.268594   20588 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 21:18:48.268621   20588 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 21:18:48.706487   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 21:18:48.706512   20588 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 21:18:48.818745   20588 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 21:18:48.818767   20588 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 21:18:48.978630   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 21:18:48.999094   20588 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 21:18:48.999117   20588 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 21:18:49.213151   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 21:18:49.213186   20588 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 21:18:49.229552   20588 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 21:18:49.229570   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 21:18:49.485885   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 21:18:49.485909   20588 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 21:18:49.732949   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 21:18:49.732977   20588 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 21:18:49.887883   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 21:18:50.082209   20588 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:18:50.082233   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 21:18:50.136617   20588 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 21:18:50.136642   20588 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 21:18:50.708247   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:18:50.757750   20588 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 21:18:50.757779   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 21:18:50.915686   20588 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.829038073s)
	I1013 21:18:50.915722   20588 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1013 21:18:50.915759   20588 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.829171131s)
	I1013 21:18:50.915874   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.787820082s)
	I1013 21:18:50.915922   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:50.915935   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:50.916275   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:50.916301   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:50.916318   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:50.916326   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:50.916572   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:50.916590   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:50.916629   20588 node_ready.go:35] waiting up to 6m0s for node "addons-323324" to be "Ready" ...
	I1013 21:18:50.933073   20588 node_ready.go:49] node "addons-323324" is "Ready"
	I1013 21:18:50.933099   20588 node_ready.go:38] duration metric: took 16.429504ms for node "addons-323324" to be "Ready" ...
	I1013 21:18:50.933112   20588 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:18:50.933166   20588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:18:51.063372   20588 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 21:18:51.063397   20588 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 21:18:51.422388   20588 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-323324" context rescaled to 1 replicas
	I1013 21:18:51.476967   20588 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 21:18:51.476988   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 21:18:51.748549   20588 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 21:18:51.748569   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 21:18:52.379912   20588 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 21:18:52.379941   20588 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 21:18:52.820273   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 21:18:53.033501   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.698830333s)
	I1013 21:18:53.033537   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.692837304s)
	I1013 21:18:53.033556   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.033566   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.033590   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.586728268s)
	I1013 21:18:53.033554   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.033633   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.033642   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.033645   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.033862   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.033877   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.033886   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.033893   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.033970   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:53.034007   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.034015   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.034023   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.034024   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:53.034030   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.034059   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.034067   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.034074   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.034081   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.034262   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.034279   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.034760   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:53.034802   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.034809   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.034946   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.034964   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.038109   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.721513127s)
	I1013 21:18:53.038152   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.038185   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.038430   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.038456   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:53.038464   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:53.038472   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:53.038674   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:53.038692   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:54.020915   20588 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 21:18:54.020959   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:54.025098   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:54.025690   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:54.025726   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:54.025975   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:54.026216   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:54.026409   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:54.026559   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:54.400653   20588 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 21:18:54.577961   20588 addons.go:238] Setting addon gcp-auth=true in "addons-323324"
	I1013 21:18:54.578013   20588 host.go:66] Checking if "addons-323324" exists ...
	I1013 21:18:54.578322   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:54.578355   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:54.591848   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I1013 21:18:54.592399   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:54.592883   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:54.592908   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:54.593317   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:54.593782   20588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:18:54.593812   20588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:18:54.607846   20588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I1013 21:18:54.608542   20588 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:18:54.609172   20588 main.go:141] libmachine: Using API Version  1
	I1013 21:18:54.609198   20588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:18:54.609602   20588 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:18:54.609888   20588 main.go:141] libmachine: (addons-323324) Calling .GetState
	I1013 21:18:54.612037   20588 main.go:141] libmachine: (addons-323324) Calling .DriverName
	I1013 21:18:54.612315   20588 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 21:18:54.612340   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHHostname
	I1013 21:18:54.616049   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:54.616509   20588 main.go:141] libmachine: (addons-323324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:03:23", ip: ""} in network mk-addons-323324: {Iface:virbr1 ExpiryTime:2025-10-13 22:18:17 +0000 UTC Type:0 Mac:52:54:00:28:03:23 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-323324 Clientid:01:52:54:00:28:03:23}
	I1013 21:18:54.616541   20588 main.go:141] libmachine: (addons-323324) DBG | domain addons-323324 has defined IP address 192.168.39.156 and MAC address 52:54:00:28:03:23 in network mk-addons-323324
	I1013 21:18:54.616723   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHPort
	I1013 21:18:54.616949   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHKeyPath
	I1013 21:18:54.617183   20588 main.go:141] libmachine: (addons-323324) Calling .GetSSHUsername
	I1013 21:18:54.617375   20588 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/addons-323324/id_rsa Username:docker}
	I1013 21:18:55.966084   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.484506005s)
	I1013 21:18:55.966134   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966146   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966190   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.472173447s)
	I1013 21:18:55.966230   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.345990702s)
	I1013 21:18:55.966250   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966260   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966231   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966305   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966393   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.974267435s)
	I1013 21:18:55.966409   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.966423   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 21:18:55.966428   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:18:55.966450   20588 retry.go:31] will retry after 281.973172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:18:55.966433   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966579   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.966580   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.966589   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.966597   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966600   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.966604   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966611   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.966619   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966626   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966629   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.967569606s)
	I1013 21:18:55.966646   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.966655   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966716   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966730   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.771283367s)
	I1013 21:18:55.967003   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.967021   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966799   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.988143634s)
	I1013 21:18:55.967079   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.967087   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.966835   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.0789261s)
	I1013 21:18:55.967133   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.967143   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.967389   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.967458   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.967477   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.967482   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.967528   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.967542   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.967565   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.967773   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.967782   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.967789   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.967570   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.967843   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.967852   20588 addons.go:479] Verifying addon ingress=true in "addons-323324"
	I1013 21:18:55.967958   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.967970   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.967583   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.967596   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.967600   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.968516   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.968525   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.968523   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.968532   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.968540   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.968550   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.968558   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.967623   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.968576   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.967669   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.968806   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.968815   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:55.968827   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:55.967733   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.969889   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.969918   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.969924   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.969931   20588 addons.go:479] Verifying addon metrics-server=true in "addons-323324"
	I1013 21:18:55.970761   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.970790   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:55.970818   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.970830   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.970844   20588 addons.go:479] Verifying addon registry=true in "addons-323324"
	I1013 21:18:55.970873   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:55.970909   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:55.972338   20588 out.go:179] * Verifying ingress addon...
	I1013 21:18:55.973262   20588 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-323324 service yakd-dashboard -n yakd-dashboard
	
	I1013 21:18:55.973267   20588 out.go:179] * Verifying registry addon...
	I1013 21:18:55.975077   20588 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 21:18:55.975938   20588 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 21:18:56.088603   20588 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:18:56.088634   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:56.088683   20588 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 21:18:56.088699   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:56.146452   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:56.146472   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:56.146773   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:56.146792   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 21:18:56.146892   20588 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1013 21:18:56.183907   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:56.183932   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:56.184377   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:56.184400   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:56.184420   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:56.249317   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:18:56.520652   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:56.520802   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:56.769737   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.06144016s)
	W1013 21:18:56.769803   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 21:18:56.769825   20588 retry.go:31] will retry after 225.970408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 21:18:56.769759   20588 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.836571441s)
	I1013 21:18:56.769855   20588 api_server.go:72] duration metric: took 10.404079291s to wait for apiserver process to appear ...
	I1013 21:18:56.769869   20588 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:18:56.769886   20588 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I1013 21:18:56.780873   20588 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I1013 21:18:56.785122   20588 api_server.go:141] control plane version: v1.34.1
	I1013 21:18:56.785148   20588 api_server.go:131] duration metric: took 15.271328ms to wait for apiserver health ...
	I1013 21:18:56.785170   20588 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:18:56.804019   20588 system_pods.go:59] 16 kube-system pods found
	I1013 21:18:56.804086   20588 system_pods.go:61] "amd-gpu-device-plugin-8jt96" [706242ca-d40e-473a-a4e2-1a246383bdee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:18:56.804101   20588 system_pods.go:61] "coredns-66bc5c9577-pwpxp" [6bcc5c7d-ab40-4c59-93d1-c7f45aa62b3a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:18:56.804115   20588 system_pods.go:61] "coredns-66bc5c9577-wv5p2" [c688f0fa-21e6-4dd1-afc3-960d81cd40f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:18:56.804128   20588 system_pods.go:61] "etcd-addons-323324" [85f30da2-dd71-4325-9660-b7e92f70250a] Running
	I1013 21:18:56.804138   20588 system_pods.go:61] "kube-apiserver-addons-323324" [b1094db0-7523-49c9-93fb-1f731f4b205e] Running
	I1013 21:18:56.804150   20588 system_pods.go:61] "kube-controller-manager-addons-323324" [7de0b3a0-a242-431c-bfca-73adef7d3ec4] Running
	I1013 21:18:56.804189   20588 system_pods.go:61] "kube-ingress-dns-minikube" [717463b5-c304-408e-a154-1901a00a3c52] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:18:56.804202   20588 system_pods.go:61] "kube-proxy-gpl4b" [d2dab544-4519-453a-bc2e-b1a5738a7f90] Running
	I1013 21:18:56.804209   20588 system_pods.go:61] "kube-scheduler-addons-323324" [653b4eee-d237-483f-ba19-ebb642dc9061] Running
	I1013 21:18:56.804217   20588 system_pods.go:61] "metrics-server-85b7d694d7-9l7cd" [a4b023ce-1b41-417f-9b68-195c1d98b084] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:18:56.804225   20588 system_pods.go:61] "nvidia-device-plugin-daemonset-4hznp" [a270c687-0bcb-46d7-8ef1-81523f6ef017] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:18:56.804237   20588 system_pods.go:61] "registry-6b586f9694-n6l2x" [bfe55504-a420-43d7-8ce8-5e3ac252cb0a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:18:56.804246   20588 system_pods.go:61] "registry-creds-764b6fb674-dqqwl" [bfde70c2-ccfa-4341-8a14-38a59d75c104] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:18:56.804259   20588 system_pods.go:61] "registry-proxy-l6gn2" [7956ef83-7889-4dd9-90e1-84cc5079dd16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:18:56.804264   20588 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l879d" [b4d7529c-48a6-4e1c-b193-c896ddc5b727] Pending
	I1013 21:18:56.804272   20588 system_pods.go:61] "storage-provisioner" [a8c4eb07-bd1b-424f-b704-fe3c84d248bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:18:56.804281   20588 system_pods.go:74] duration metric: took 19.103049ms to wait for pod list to return data ...
	I1013 21:18:56.804294   20588 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:18:56.822902   20588 default_sa.go:45] found service account: "default"
	I1013 21:18:56.822937   20588 default_sa.go:55] duration metric: took 18.635807ms for default service account to be created ...
	I1013 21:18:56.822950   20588 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:18:56.885764   20588 system_pods.go:86] 17 kube-system pods found
	I1013 21:18:56.885799   20588 system_pods.go:89] "amd-gpu-device-plugin-8jt96" [706242ca-d40e-473a-a4e2-1a246383bdee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1013 21:18:56.885811   20588 system_pods.go:89] "coredns-66bc5c9577-pwpxp" [6bcc5c7d-ab40-4c59-93d1-c7f45aa62b3a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:18:56.885823   20588 system_pods.go:89] "coredns-66bc5c9577-wv5p2" [c688f0fa-21e6-4dd1-afc3-960d81cd40f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:18:56.885829   20588 system_pods.go:89] "etcd-addons-323324" [85f30da2-dd71-4325-9660-b7e92f70250a] Running
	I1013 21:18:56.885836   20588 system_pods.go:89] "kube-apiserver-addons-323324" [b1094db0-7523-49c9-93fb-1f731f4b205e] Running
	I1013 21:18:56.885841   20588 system_pods.go:89] "kube-controller-manager-addons-323324" [7de0b3a0-a242-431c-bfca-73adef7d3ec4] Running
	I1013 21:18:56.885855   20588 system_pods.go:89] "kube-ingress-dns-minikube" [717463b5-c304-408e-a154-1901a00a3c52] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:18:56.885860   20588 system_pods.go:89] "kube-proxy-gpl4b" [d2dab544-4519-453a-bc2e-b1a5738a7f90] Running
	I1013 21:18:56.885867   20588 system_pods.go:89] "kube-scheduler-addons-323324" [653b4eee-d237-483f-ba19-ebb642dc9061] Running
	I1013 21:18:56.885879   20588 system_pods.go:89] "metrics-server-85b7d694d7-9l7cd" [a4b023ce-1b41-417f-9b68-195c1d98b084] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:18:56.885891   20588 system_pods.go:89] "nvidia-device-plugin-daemonset-4hznp" [a270c687-0bcb-46d7-8ef1-81523f6ef017] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:18:56.885906   20588 system_pods.go:89] "registry-6b586f9694-n6l2x" [bfe55504-a420-43d7-8ce8-5e3ac252cb0a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:18:56.885919   20588 system_pods.go:89] "registry-creds-764b6fb674-dqqwl" [bfde70c2-ccfa-4341-8a14-38a59d75c104] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:18:56.885927   20588 system_pods.go:89] "registry-proxy-l6gn2" [7956ef83-7889-4dd9-90e1-84cc5079dd16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:18:56.885933   20588 system_pods.go:89] "snapshot-controller-7d9fbc56b8-687t9" [677addb4-80d1-4385-a85a-0b7d0b97fef6] Pending
	I1013 21:18:56.885941   20588 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l879d" [b4d7529c-48a6-4e1c-b193-c896ddc5b727] Pending
	I1013 21:18:56.885949   20588 system_pods.go:89] "storage-provisioner" [a8c4eb07-bd1b-424f-b704-fe3c84d248bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:18:56.885960   20588 system_pods.go:126] duration metric: took 63.002013ms to wait for k8s-apps to be running ...
	I1013 21:18:56.885973   20588 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:18:56.886027   20588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:18:56.996257   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 21:18:57.039415   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:57.039639   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:57.506101   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:57.507281   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:57.985929   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:57.994456   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.174132046s)
	I1013 21:18:57.994513   20588 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.382175511s)
	I1013 21:18:57.994513   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:57.994661   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:57.995022   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:57.995039   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:57.995043   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:18:57.995054   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:18:57.995065   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:18:57.995298   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:18:57.995311   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:18:57.995322   20588 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-323324"
	I1013 21:18:57.995912   20588 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 21:18:57.996798   20588 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 21:18:57.998403   20588 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1013 21:18:57.999023   20588 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 21:18:57.999792   20588 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 21:18:57.999815   20588 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 21:18:58.009837   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:58.048511   20588 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:18:58.048535   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:18:58.254582   20588 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 21:18:58.254609   20588 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 21:18:58.363352   20588 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 21:18:58.363374   20588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 21:18:58.484095   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:58.484264   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:58.583668   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 21:18:58.584522   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:18:58.983788   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:58.986123   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:59.007614   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:18:59.483371   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:59.483552   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:18:59.509709   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:18:59.981220   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:18:59.982372   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:00.007099   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:00.217896   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.968536317s)
	W1013 21:19:00.217945   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:00.217955   20588 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.331901871s)
	I1013 21:19:00.217968   20588 retry.go:31] will retry after 430.222294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:00.217987   20588 system_svc.go:56] duration metric: took 3.332010673s WaitForService to wait for kubelet
	I1013 21:19:00.217998   20588 kubeadm.go:586] duration metric: took 13.852222891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:19:00.218026   20588 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:19:00.218034   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.221743658s)
	I1013 21:19:00.218071   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:19:00.218086   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:19:00.218403   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:19:00.218421   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:19:00.218429   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:19:00.218437   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:19:00.218657   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:19:00.218666   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:19:00.218673   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:19:00.225069   20588 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 21:19:00.225090   20588 node_conditions.go:123] node cpu capacity is 2
	I1013 21:19:00.225100   20588 node_conditions.go:105] duration metric: took 7.068098ms to run NodePressure ...
	I1013 21:19:00.225111   20588 start.go:241] waiting for startup goroutines ...
	I1013 21:19:00.515246   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:00.518360   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:00.564445   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.980735735s)
	I1013 21:19:00.564499   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:19:00.564516   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:19:00.564783   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:19:00.564798   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:19:00.564806   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:19:00.564812   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:19:00.565012   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:19:00.565082   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:19:00.565103   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:19:00.566259   20588 addons.go:479] Verifying addon gcp-auth=true in "addons-323324"
	I1013 21:19:00.567992   20588 out.go:179] * Verifying gcp-auth addon...
	I1013 21:19:00.570226   20588 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 21:19:00.577921   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:00.616033   20588 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 21:19:00.616056   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:00.649298   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:00.980433   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:00.984735   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:01.006762   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:01.083500   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:01.486498   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:01.487614   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:01.504803   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:01.574959   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:01.983098   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:01.984376   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:02.003805   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:02.083342   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:02.452372   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.80302772s)
	W1013 21:19:02.452419   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:02.452445   20588 retry.go:31] will retry after 390.193812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:02.488326   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:02.488502   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:02.509101   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:02.588153   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:02.843434   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:02.984691   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:02.985030   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:03.005272   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:03.081320   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:03.482491   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:03.482802   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:03.503839   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:03.576846   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:03.982215   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:03.984781   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:04.004250   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:04.074360   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:04.200280   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.356800493s)
	W1013 21:19:04.200329   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:04.200353   20588 retry.go:31] will retry after 789.727023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:04.487968   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:04.488665   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:04.504811   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:04.576560   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:04.982186   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:04.985259   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:04.990258   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:05.004328   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:05.075769   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:05.481227   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:05.481530   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:05.509523   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:05.573798   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:05.980956   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:05.984192   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:06.002626   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:06.033203   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.042864533s)
	W1013 21:19:06.033251   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:06.033274   20588 retry.go:31] will retry after 1.388582121s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:06.074195   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:06.483906   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:06.484290   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:06.504351   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:06.575806   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:06.982675   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:06.987407   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:07.004217   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:07.074964   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:07.422287   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:07.481967   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:07.484448   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:07.505634   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:07.576175   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:07.984345   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:07.990368   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:08.002544   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:08.076361   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:08.482497   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:08.482707   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:08.507455   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:08.576745   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:08.604537   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.182211068s)
	W1013 21:19:08.604574   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:08.604590   20588 retry.go:31] will retry after 2.778208241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:08.985690   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:08.988364   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:09.007856   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:09.074900   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:09.480212   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:09.481959   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:09.505115   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:09.576382   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:09.997226   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:09.997257   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:10.007337   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:10.074801   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:10.482181   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:10.482325   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:10.502403   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:10.576905   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:10.984371   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:10.985642   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:11.003709   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:11.081418   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:11.383850   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:11.484970   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:11.486606   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:11.505864   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:11.739839   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:11.989355   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:11.989381   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:12.005789   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:12.075149   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:12.479946   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:12.483859   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:12.507355   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:12.516580   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.132684492s)
	W1013 21:19:12.516623   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:12.516646   20588 retry.go:31] will retry after 3.368305021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:12.575925   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:13.168546   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:13.175345   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:13.176879   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:13.178573   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:13.481651   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:13.482814   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:13.504069   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:13.574902   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:13.984046   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:13.985531   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:14.004789   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:14.073536   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:14.479151   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:14.481336   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:14.506108   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:14.574876   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:14.978075   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:14.980456   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:15.006691   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:15.073527   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:15.481958   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:15.482012   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:15.504762   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:15.573923   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:15.886107   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:15.982840   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:15.983781   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.005677   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:16.074662   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:16.481017   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:16.485854   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.509070   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:16.574075   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:16.983557   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:16.983595   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:17.000349   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.114204928s)
	W1013 21:19:17.000393   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:17.000414   20588 retry.go:31] will retry after 3.808384787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:17.003457   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:17.073821   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:17.480050   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:17.481326   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:17.506435   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:17.574779   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:17.978830   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:17.979380   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:18.003689   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:18.073706   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:18.481996   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:18.482672   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.504462   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:18.578112   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:18.982046   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:18.985861   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.005411   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:19.074728   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:19.480713   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:19.481608   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.580499   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:19.583327   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:19.980705   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:19.981255   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:20.002862   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:20.074738   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:20.480144   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:20.480302   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:20.502612   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:20.575362   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:20.809692   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:20.981314   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:20.982559   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.004428   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:21.073921   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:21.488102   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:21.488359   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.508431   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:21.574294   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:21.980910   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:21.983270   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.002702   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:22.042923   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.233189066s)
	W1013 21:19:22.042979   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:22.043000   20588 retry.go:31] will retry after 6.590334255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:22.074469   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:22.484843   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.485021   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:22.506249   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:22.573946   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:22.979702   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:22.982019   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:23.005861   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:23.076080   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:23.481073   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:23.481355   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:23.503921   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:23.574116   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:23.979841   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:23.981756   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.004423   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:24.073444   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:24.482645   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:24.486665   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.504897   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:24.574923   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:24.981134   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:24.981437   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.003137   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:25.073704   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:25.481492   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:25.482106   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.505497   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:25.581601   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:25.985126   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:25.985363   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.006118   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:26.077582   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:26.482829   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:26.483542   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.502493   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:26.574282   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:26.981073   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:26.981937   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:27.024703   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:27.078124   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.483410   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:27.486123   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:27.507301   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:27.580795   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:27.985972   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:27.987745   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:28.008299   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:28.075839   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:28.481002   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:28.484074   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:28.504088   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:28.574696   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:28.633871   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:28.981692   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:28.981753   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:29.003068   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:29.074191   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:29.480087   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:29.480091   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:29.504328   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:29.574732   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:29.720355   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.086440898s)
	W1013 21:19:29.720403   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:29.720420   20588 retry.go:31] will retry after 6.494468729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:29.981958   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:29.982573   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:30.006128   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:30.076059   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:30.561911   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:30.562531   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:30.564667   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:30.575059   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:30.984999   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:30.986382   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:31.010877   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:31.079088   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:31.479393   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:31.481538   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:31.651264   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:31.652337   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:31.984139   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:31.984269   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:32.004477   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:32.075781   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.533608   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:32.533743   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:32.535165   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:32.675573   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:32.983521   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:32.984074   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:33.006552   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:33.075082   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:33.481254   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:33.483442   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:33.508339   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:33.575580   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:33.981069   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:33.981690   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.004019   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:34.076244   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:34.484281   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.486573   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:34.506190   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:34.574464   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:34.982583   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:34.986039   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.005591   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:35.075526   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:35.481262   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:35.481530   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.506261   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:35.576012   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:35.985723   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:35.987908   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:36.010235   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:36.075525   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:36.215775   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:36.481091   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:36.481433   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:36.583178   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:36.584040   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:36.981469   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:36.984220   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:37.007265   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:37.077098   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:37.481493   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:37.483610   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:37.505072   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:37.577214   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:37.822346   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.606525659s)
	W1013 21:19:37.822390   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:37.822412   20588 retry.go:31] will retry after 9.990036161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:37.988544   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:37.988586   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:38.006452   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:38.082361   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:38.484064   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:38.486971   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:38.584141   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:38.586146   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:38.990852   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:38.991584   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:39.008651   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:39.074796   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:39.483131   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:39.484752   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:39.505527   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:39.575242   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:39.982080   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:39.983074   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:40.005845   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:40.075777   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:40.478986   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:40.480947   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:40.505741   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:40.577729   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:40.982384   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:40.982493   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:41.002692   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:41.073976   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:41.481058   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:41.482083   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:41.518684   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:41.578971   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:42.106419   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:42.106459   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.106788   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:42.107065   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:42.478364   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.482833   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:42.503424   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:42.576425   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:42.980930   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:42.981721   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:43.003347   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:43.074616   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:43.480670   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:43.491187   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:43.505297   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:43.573831   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:43.981451   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:43.986498   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.003737   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:44.073919   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:44.481874   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:44.483989   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.504277   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:44.574588   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:44.979842   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:44.982897   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:45.004180   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:45.075137   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:45.485722   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:45.485985   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:45.506904   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:45.576022   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:45.981461   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:45.981683   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:46.004383   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:46.076737   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:46.482136   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:46.485922   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:46.508974   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:46.582585   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:46.986069   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:46.992487   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.004746   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:47.074466   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:47.482222   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:47.483090   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.503258   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:47.578516   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:47.812809   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:19:47.995725   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:47.995874   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.011929   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:48.076031   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:48.482349   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.483237   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:48.505080   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:48.575402   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:48.952597   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.139746225s)
	W1013 21:19:48.952645   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:48.952671   20588 retry.go:31] will retry after 11.658446543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:19:48.983099   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:48.983311   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.083921   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:49.084456   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:49.479236   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:49.479415   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.502996   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:49.574255   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:49.979099   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:49.980334   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.002971   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:50.073931   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:50.483370   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:50.485297   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.508124   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:50.576259   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:50.983278   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:50.983354   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:51.006514   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:51.075481   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:51.481990   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:51.482233   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:51.504318   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:51.576250   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:51.983299   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:51.986631   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.004851   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:52.074325   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:52.481065   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:19:52.481579   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.504379   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:52.573307   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:52.983358   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:52.983454   20588 kapi.go:107] duration metric: took 57.007513699s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 21:19:53.005039   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:53.075978   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:53.479064   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:53.502402   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:53.573742   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:53.979995   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:54.004226   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:54.074973   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.481061   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:54.504532   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:54.574177   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:54.981960   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:55.008116   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:55.078874   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:55.483955   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:55.505335   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:55.575480   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:55.981320   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:56.005752   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:56.079602   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:56.480061   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:56.504102   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:56.576391   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:56.985961   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:57.005183   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:57.084317   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:57.481704   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:57.506028   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:57.574122   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:57.981054   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:58.082744   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:58.083168   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:58.481564   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:58.506235   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:58.577430   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:58.980359   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:59.002830   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:59.074701   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:59.484668   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:19:59.508487   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:19:59.587820   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:19:59.980869   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:00.006406   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:00.082313   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:00.482708   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:00.516731   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:00.574025   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:00.612314   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:20:00.990796   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:01.014675   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:01.100959   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:01.484346   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:01.505220   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:01.576861   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:01.980915   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:02.006311   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:02.084262   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:02.450673   20588 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.838320442s)
	W1013 21:20:02.450712   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:02.450729   20588 retry.go:31] will retry after 17.591254671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:02.484524   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:02.507349   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:02.576214   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:02.979370   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:03.004926   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:03.077823   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:03.480999   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:03.503212   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:03.575675   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:03.982204   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:04.005516   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:04.072995   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:04.478524   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:04.508531   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:04.576751   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:04.978965   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:05.003446   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:05.074587   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:05.585060   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:05.586251   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:05.586603   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:05.979822   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:06.005576   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:06.074381   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:06.480208   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:06.502896   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:06.578754   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:06.979733   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:07.005963   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:07.077852   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:07.480386   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:07.503065   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:07.574553   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:07.979861   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:08.005149   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:08.076175   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:08.487420   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:08.504937   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:08.575316   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:08.982021   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:09.004102   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:09.074053   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:09.480333   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:09.509183   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:09.579563   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:09.982881   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:10.003729   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:10.074601   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:10.480140   20588 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:20:10.505602   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:10.586627   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:10.979828   20588 kapi.go:107] duration metric: took 1m15.004749888s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 21:20:11.024327   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:11.122624   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:11.503693   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:11.603628   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:12.003816   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:12.073709   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:12.510468   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:12.574418   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:13.006472   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:13.074408   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:13.504417   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:13.575683   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:14.004776   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:14.075457   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:14.506283   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:14.575643   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:20:15.010078   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:15.109083   20588 kapi.go:107] duration metric: took 1m14.538851453s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 21:20:15.110792   20588 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-323324 cluster.
	I1013 21:20:15.112241   20588 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 21:20:15.113999   20588 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 21:20:15.502805   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:16.006334   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:16.507591   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:17.004919   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:17.506961   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:18.003314   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:18.570650   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:19.005268   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:19.503068   20588 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:20:20.004579   20588 kapi.go:107] duration metric: took 1m22.005552765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 21:20:20.042677   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:20:20.827278   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:20.827316   20588 retry.go:31] will retry after 26.084633534s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:46.912842   20588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1013 21:20:47.620754   20588 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:20:47.620831   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:20:47.620846   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:20:47.621187   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:20:47.621206   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 21:20:47.621217   20588 main.go:141] libmachine: Making call to close driver server
	I1013 21:20:47.621220   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:20:47.621223   20588 main.go:141] libmachine: (addons-323324) Calling .Close
	I1013 21:20:47.621547   20588 main.go:141] libmachine: (addons-323324) DBG | Closing plugin on server side
	I1013 21:20:47.621608   20588 main.go:141] libmachine: Successfully made call to close driver server
	I1013 21:20:47.621622   20588 main.go:141] libmachine: Making call to close connection to plugin binary
	W1013 21:20:47.621705   20588 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1013 21:20:47.623470   20588 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, registry-creds, ingress-dns, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1013 21:20:47.624887   20588 addons.go:514] duration metric: took 2m1.259080288s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner registry-creds ingress-dns storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1013 21:20:47.624933   20588 start.go:246] waiting for cluster config update ...
	I1013 21:20:47.624955   20588 start.go:255] writing updated cluster config ...
	I1013 21:20:47.625258   20588 ssh_runner.go:195] Run: rm -f paused
	I1013 21:20:47.633658   20588 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:20:47.637968   20588 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pwpxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:47.643618   20588 pod_ready.go:94] pod "coredns-66bc5c9577-pwpxp" is "Ready"
	I1013 21:20:47.643635   20588 pod_ready.go:86] duration metric: took 5.64937ms for pod "coredns-66bc5c9577-pwpxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:47.645881   20588 pod_ready.go:83] waiting for pod "etcd-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:47.650130   20588 pod_ready.go:94] pod "etcd-addons-323324" is "Ready"
	I1013 21:20:47.650146   20588 pod_ready.go:86] duration metric: took 4.241321ms for pod "etcd-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:47.652551   20588 pod_ready.go:83] waiting for pod "kube-apiserver-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:47.657388   20588 pod_ready.go:94] pod "kube-apiserver-addons-323324" is "Ready"
	I1013 21:20:47.657413   20588 pod_ready.go:86] duration metric: took 4.845069ms for pod "kube-apiserver-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:47.659655   20588 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:48.038677   20588 pod_ready.go:94] pod "kube-controller-manager-addons-323324" is "Ready"
	I1013 21:20:48.038711   20588 pod_ready.go:86] duration metric: took 379.035331ms for pod "kube-controller-manager-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:48.238359   20588 pod_ready.go:83] waiting for pod "kube-proxy-gpl4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:48.638848   20588 pod_ready.go:94] pod "kube-proxy-gpl4b" is "Ready"
	I1013 21:20:48.638879   20588 pod_ready.go:86] duration metric: took 400.491838ms for pod "kube-proxy-gpl4b" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:48.839901   20588 pod_ready.go:83] waiting for pod "kube-scheduler-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:49.238333   20588 pod_ready.go:94] pod "kube-scheduler-addons-323324" is "Ready"
	I1013 21:20:49.238365   20588 pod_ready.go:86] duration metric: took 398.429506ms for pod "kube-scheduler-addons-323324" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:20:49.238378   20588 pod_ready.go:40] duration metric: took 1.604689961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:20:49.282404   20588 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1013 21:20:49.284220   20588 out.go:179] * Done! kubectl is now configured to use "addons-323324" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.852074786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff062429-fe1a-4ad7-bc3f-7a352c9f5b37 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.852230605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff062429-fe1a-4ad7-bc3f-7a352c9f5b37 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.852942487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc13bc4c53a221ba47836b1f7890d124a9e3453355a12a074618d44078a10a8d,PodSandboxId:8d56fc4137e10814e8771004035df22f0cc4e7be93a166bbc228ce89160d4332,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760390493122856558,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 485bbbff-1382-46c5-a272-230368cf2188,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b568cf655451b264e5173463f9d954f62322da00b6ec1d2f0983f00f8bf4ff77,PodSandboxId:75f1c9d13f880de74b69c95dd0dc85bf0f6f4c8136d18aaf71ecc925b8a07ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760390452769038389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30611480-4cab-4670-840b-c6b0d2f9f7ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f135c378870cc773e2d6be740a4ed99412b059798f3cc4e0eb9695c3a64cf4f,PodSandboxId:bab48738f5ef6ad1fdbbce4e4d52c57be994069594e49fa7f6141d8ab7d6f11a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760390409409735879,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9tfj8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: debda1e1-b091-49e4-9f1d-cbee4e609185,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7b9f0a791d3634879f81f9e05f8d9ebbc3ff62c55941641aa751cf0c509afb48,PodSandboxId:9210d9b94a3a370f7cce9fc2393fbf923ead8c32e57175982209cc1f7ccf786e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760390399424374407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4qjrx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6ef6b1b-287a-4d95-8bf5-2f7256fa0ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977683233a322dba4e82bea7894f8ff64be195e681a2321ad2eed43396bb6076,PodSandboxId:f5c2f5f0a61f0d855daeb281702f6bc5d86cb2d13f22912519e9c147501b4109,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760390388826309846,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hqf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e07978c9-f3cb-4feb-9056-863fc33ba1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66322053750ecc589b17b548f50fac7ec9f58fc1277c33c2d80c79b47a3caea2,PodSandboxId:fe49484fece50915364a4dd182798c1fc7559a50d3b4d573df3faa11264266d2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760390382762766748,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-w7k84,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 80cdcf0d-89ad-4fec-bb90-68a707dc90c4,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053001a2131fc69c7d45fe07f197237d29ee63e68e220b2e8e8b40ea08f80ae3,PodSandboxId:21016e9c7c9fd836a00fd2b90eb9469dc41968ef996ac67b0255519b930aedfc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760390378486288308,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dtw59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2f094d4e-ffef-419e-9457-c1cdd95a8dd2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d010ef97a0eedea5805bd3ae2502946948809e47b312a597a42296d42a9902,PodSandboxId:30a0b2b2e931ae56298dc82c17b89d4005086444d286e47a329b8dcd15655505,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760390374093958390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717463b5-c304-408e-a154-1901a00a3c52,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886e4df955463c3857d14bacf52642ac4532fd6160f3b1ecba1bb73dfa08140,PodSandboxId:ebc74e6068dd5d8af234d89482a06d6288d6
d22fee03a298ba22e0233233e9e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760390336466976384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c4eb07-bd1b-424f-b704-fe3c84d248bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2d5997b01b432d5249a78cdd4e1d49bf76c21db629914ea11ede7289725416,PodSandboxId:fb8231bb4991925b8ffa1534bbaa8624ed765a0ba3db29dd
483520165cad83ae,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760390336127206529,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8jt96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706242ca-d40e-473a-a4e2-1a246383bdee,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de69a3e4003d9a5346bd311f0936e2dde0c744383ac2d378283318e982bf81cb,PodSandboxId:2ba1e05e
b877db6e953e558955c002eb59128b6597609db77d4dc42e12906afe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760390328429589527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pwpxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcc5c7d-ab40-4c59-93d1-c7f45aa62b3a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3425a4d65e31eaca6b66edf83b549969ab54ccaefaeb9bf5dac23e56c7bf4add,PodSandboxId:9e404262bd5b154b5baff560d2ec1ffbe9f45a904c5cf341060f6f48e4d67f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760390327736858628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gpl4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2dab544-4519-453a-bc2e-b1a5738a7f90,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f8730d4a0423299ed168fe403997aa9d10e446e5138fe38c585761bef40ef6,PodSandboxId:87e269ea6eea386b22ca1be6e36a53f8773e9eabc1c55f0923a0c58ce4bc671f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760390315776840569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d33771df8cd787e9b43bfd79a0deca,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceabe1eeef125dd69d3df210fc4f4dec5cf77549e55b7eea52f4d468efe88c0,PodSandboxId:bdb0876aa8157989e05ec4baf030f78223a31c0bc5dc922b1942e12a1e75c6d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760390315796525928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6023247c17bdb7942c57bf3f3cc3ebd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9888724e317aa48ab13fcd80d2fcd3a8731bf3e687660015e3312435c56a68,PodSandboxId:233cfcfa704305636e06163d089f34f97e8360e526b722e2fe3c458fd36da082,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760390315748026704,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b474ab11f9ed0cfacbe0915f53fc096,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da1c7a0ebf7decf526003d063f40efab7547f1a7c6d6649ff7da760b62ec6d5,PodSandboxId:ff343940be2f6d7e50904a570b57a697abecc07a243c504f379d5387730ceaa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:176039031574578231
4,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b570da59468f290fd78625667929fcd4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff062429-fe1a-4ad7-bc3f-7a352c9f5b37 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 conmon[13930]: conmon 3fb3952d5a779c4827f6 <ndebug>: container PID: 13942
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.868345197Z" level=debug msg="Received container pid: 13942" file="oci/runtime_oci.go:284" id=569b4182-7d40-4347-82ad-bf45a082dc8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.884767304Z" level=info msg="Created container 3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb: default/hello-world-app-5d498dc89-mdhnd/hello-world-app" file="server/container_create.go:491" id=569b4182-7d40-4347-82ad-bf45a082dc8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.884988976Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb,}" file="otel-collector/interceptors.go:74" id=569b4182-7d40-4347-82ad-bf45a082dc8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.886527951Z" level=debug msg="Request: &StartContainerRequest{ContainerId:3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb,}" file="otel-collector/interceptors.go:62" id=956b3010-87cf-41f2-97c6-dea9990458a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.886780852Z" level=info msg="Starting container: 3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb" file="server/container_start.go:21" id=956b3010-87cf-41f2-97c6-dea9990458a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.905992082Z" level=info msg="Started container" PID=13942 containerID=3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb description=default/hello-world-app-5d498dc89-mdhnd/hello-world-app file="server/container_start.go:115" id=956b3010-87cf-41f2-97c6-dea9990458a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f82e0925e87ff21f356e7f1207ae86984752a4969a103f0d786b2fc8abd07286
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.906328596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac204084-1e01-46ba-8e0c-5656bc6d1d4e name=/runtime.v1.RuntimeService/Version
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.906441761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac204084-1e01-46ba-8e0c-5656bc6d1d4e name=/runtime.v1.RuntimeService/Version
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.909240721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a27ee29-dec0-4953-85ec-be61139f2819 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.910906904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760390635910817030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606631,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a27ee29-dec0-4953-85ec-be61139f2819 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.911573335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b823102-d5b2-485e-838c-5091f46e104e name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.911646758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b823102-d5b2-485e-838c-5091f46e104e name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.912063232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb,PodSandboxId:f82e0925e87ff21f356e7f1207ae86984752a4969a103f0d786b2fc8abd07286,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_CREATED,CreatedAt:1760390635805059019,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-mdhnd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc7056db-e3b1-41cf-8f34-2c716f5b686c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc13bc4c53a221ba47836b1f7890d124a9e3453355a12a074618d44078a10a8d,PodSandboxId:8d56fc4137e10814e8771004035df22f0cc4e7be93a166bbc228ce89160d4332,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760390493122856558,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 485bbbff-1382-46c5-a272-230368cf2188,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b568cf655451b264e5173463f9d954f62322da00b6ec1d2f0983f00f8bf4ff77,PodSandboxId:75f1c9d13f880de74b69c95dd0dc85bf0f6f4c8136d18aaf71ecc925b8a07ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760390452769038389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30611480-4cab-4670-84
0b-c6b0d2f9f7ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f135c378870cc773e2d6be740a4ed99412b059798f3cc4e0eb9695c3a64cf4f,PodSandboxId:bab48738f5ef6ad1fdbbce4e4d52c57be994069594e49fa7f6141d8ab7d6f11a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760390409409735879,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9tfj8,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: debda1e1-b091-49e4-9f1d-cbee4e609185,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7b9f0a791d3634879f81f9e05f8d9ebbc3ff62c55941641aa751cf0c509afb48,PodSandboxId:9210d9b94a3a370f7cce9fc2393fbf923ead8c32e57175982209cc1f7ccf786e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760390399424374407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4qjrx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6ef6b1b-287a-4d95-8bf5-2f7256fa0ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977683233a322dba4e82bea7894f8ff64be195e681a2321ad2eed43396bb6076,PodSandboxId:f5c2f5f0a61f0d855daeb281702f6bc5d86cb2d13f22912519e9c147501b4109,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760390388826309846,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hqf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e07978c9-f3cb-4feb-9056-863fc33ba1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66322053750ecc589b17b548f50fac7ec9f58fc1277c33c2d80c79b47a3caea2,PodSandboxId:fe49484fece50915364a4dd182798c1fc7559a50d3b4d573df3faa11264266d2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a
76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760390382762766748,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-w7k84,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 80cdcf0d-89ad-4fec-bb90-68a707dc90c4,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053001a2131fc69c7d45fe07f197237d29ee63e68e220b2e8e8b40ea08f80ae3,PodSandboxId:21016e9c7c9fd836a00fd2b90eb9469dc41968ef996ac67b0255519b930aedfc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Ima
ge:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760390378486288308,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dtw59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2f094d4e-ffef-419e-9457-c1cdd95a8dd2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d010ef97a0eedea5805bd3ae2502946948809e47b312a597a42296d42a9902,PodSandboxId:30a0b2b2e931ae56298dc82c17b89d4005086444d286e47a329b8dcd15655505,Me
tadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760390374093958390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717463b5-c304-408e-a154-1901a00a3c52,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886e4df95
5463c3857d14bacf52642ac4532fd6160f3b1ecba1bb73dfa08140,PodSandboxId:ebc74e6068dd5d8af234d89482a06d6288d6d22fee03a298ba22e0233233e9e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760390336466976384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c4eb07-bd1b-424f-b704-fe3c84d248bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2d5997b01b432d5249a7
8cdd4e1d49bf76c21db629914ea11ede7289725416,PodSandboxId:fb8231bb4991925b8ffa1534bbaa8624ed765a0ba3db29dd483520165cad83ae,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760390336127206529,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8jt96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706242ca-d40e-473a-a4e2-1a246383bdee,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:de69a3e4003d9a5346bd311f0936e2dde0c744383ac2d378283318e982bf81cb,PodSandboxId:2ba1e05eb877db6e953e558955c002eb59128b6597609db77d4dc42e12906afe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760390328429589527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pwpxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcc5c7d-ab40-4c59-93d1-c7f45aa62b3a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"live
ness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3425a4d65e31eaca6b66edf83b549969ab54ccaefaeb9bf5dac23e56c7bf4add,PodSandboxId:9e404262bd5b154b5baff560d2ec1ffbe9f45a904c5cf341060f6f48e4d67f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760390327736858628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gpl4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2dab54
4-4519-453a-bc2e-b1a5738a7f90,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f8730d4a0423299ed168fe403997aa9d10e446e5138fe38c585761bef40ef6,PodSandboxId:87e269ea6eea386b22ca1be6e36a53f8773e9eabc1c55f0923a0c58ce4bc671f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760390315776840569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d33771df8cd787e9b4
3bfd79a0deca,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceabe1eeef125dd69d3df210fc4f4dec5cf77549e55b7eea52f4d468efe88c0,PodSandboxId:bdb0876aa8157989e05ec4baf030f78223a31c0bc5dc922b1942e12a1e75c6d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760390315796525928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323324,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6023247c17bdb7942c57bf3f3cc3ebd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9888724e317aa48ab13fcd80d2fcd3a8731bf3e687660015e3312435c56a68,PodSandboxId:233cfcfa704305636e06163d089f34f97e8360e526b722e2fe3c458fd36da082,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760390315748026704,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b474ab11f9ed0cfacbe0915f53fc096,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da1c7a0ebf7decf526003d063f40efab7547f1a7c6d6649ff7da760b62ec6d5,PodSandboxId:ff343940be2f6d7e50904a570b57a697abecc07a243c504f379d5387730ceaa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc696102
4917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760390315745782314,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b570da59468f290fd78625667929fcd4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b823102-d5b2-485e-838c-5091f46e104e name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.923899030Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=956b3010-87cf-41f2-97c6-dea9990458a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.956636449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8933b7a1-249c-461b-b75f-c5736697e51c name=/runtime.v1.RuntimeService/Version
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.956772477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8933b7a1-249c-461b-b75f-c5736697e51c name=/runtime.v1.RuntimeService/Version
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.958117092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60243a42-bc95-413f-826d-a3d634a7fd31 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.960285226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760390635960257945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606631,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60243a42-bc95-413f-826d-a3d634a7fd31 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.961747042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=803d44ee-d54c-41a9-84f8-a65a6111bea9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.962038112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=803d44ee-d54c-41a9-84f8-a65a6111bea9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:23:55 addons-323324 crio[824]: time="2025-10-13 21:23:55.963197487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3fb3952d5a779c4827f65495bfac1f070a64e0e8850f719d509f34a79649bcbb,PodSandboxId:f82e0925e87ff21f356e7f1207ae86984752a4969a103f0d786b2fc8abd07286,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1760390635805059019,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-mdhnd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc7056db-e3b1-41cf-8f34-2c716f5b686c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc13bc4c53a221ba47836b1f7890d124a9e3453355a12a074618d44078a10a8d,PodSandboxId:8d56fc4137e10814e8771004035df22f0cc4e7be93a166bbc228ce89160d4332,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760390493122856558,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 485bbbff-1382-46c5-a272-230368cf2188,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b568cf655451b264e5173463f9d954f62322da00b6ec1d2f0983f00f8bf4ff77,PodSandboxId:75f1c9d13f880de74b69c95dd0dc85bf0f6f4c8136d18aaf71ecc925b8a07ad7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760390452769038389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30611480-4cab-4670-84
0b-c6b0d2f9f7ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f135c378870cc773e2d6be740a4ed99412b059798f3cc4e0eb9695c3a64cf4f,PodSandboxId:bab48738f5ef6ad1fdbbce4e4d52c57be994069594e49fa7f6141d8ab7d6f11a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760390409409735879,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9tfj8,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: debda1e1-b091-49e4-9f1d-cbee4e609185,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7b9f0a791d3634879f81f9e05f8d9ebbc3ff62c55941641aa751cf0c509afb48,PodSandboxId:9210d9b94a3a370f7cce9fc2393fbf923ead8c32e57175982209cc1f7ccf786e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760390399424374407,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4qjrx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6ef6b1b-287a-4d95-8bf5-2f7256fa0ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977683233a322dba4e82bea7894f8ff64be195e681a2321ad2eed43396bb6076,PodSandboxId:f5c2f5f0a61f0d855daeb281702f6bc5d86cb2d13f22912519e9c147501b4109,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760390388826309846,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8hqf6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e07978c9-f3cb-4feb-9056-863fc33ba1d6,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66322053750ecc589b17b548f50fac7ec9f58fc1277c33c2d80c79b47a3caea2,PodSandboxId:fe49484fece50915364a4dd182798c1fc7559a50d3b4d573df3faa11264266d2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a
76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760390382762766748,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-w7k84,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 80cdcf0d-89ad-4fec-bb90-68a707dc90c4,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053001a2131fc69c7d45fe07f197237d29ee63e68e220b2e8e8b40ea08f80ae3,PodSandboxId:21016e9c7c9fd836a00fd2b90eb9469dc41968ef996ac67b0255519b930aedfc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Ima
ge:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760390378486288308,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-dtw59,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 2f094d4e-ffef-419e-9457-c1cdd95a8dd2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2d010ef97a0eedea5805bd3ae2502946948809e47b312a597a42296d42a9902,PodSandboxId:30a0b2b2e931ae56298dc82c17b89d4005086444d286e47a329b8dcd15655505,Me
tadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760390374093958390,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 717463b5-c304-408e-a154-1901a00a3c52,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4886e4df95
5463c3857d14bacf52642ac4532fd6160f3b1ecba1bb73dfa08140,PodSandboxId:ebc74e6068dd5d8af234d89482a06d6288d6d22fee03a298ba22e0233233e9e3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760390336466976384,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8c4eb07-bd1b-424f-b704-fe3c84d248bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2d5997b01b432d5249a7
8cdd4e1d49bf76c21db629914ea11ede7289725416,PodSandboxId:fb8231bb4991925b8ffa1534bbaa8624ed765a0ba3db29dd483520165cad83ae,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760390336127206529,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8jt96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706242ca-d40e-473a-a4e2-1a246383bdee,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:de69a3e4003d9a5346bd311f0936e2dde0c744383ac2d378283318e982bf81cb,PodSandboxId:2ba1e05eb877db6e953e558955c002eb59128b6597609db77d4dc42e12906afe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760390328429589527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pwpxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bcc5c7d-ab40-4c59-93d1-c7f45aa62b3a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"live
ness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3425a4d65e31eaca6b66edf83b549969ab54ccaefaeb9bf5dac23e56c7bf4add,PodSandboxId:9e404262bd5b154b5baff560d2ec1ffbe9f45a904c5cf341060f6f48e4d67f08,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760390327736858628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gpl4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2dab54
4-4519-453a-bc2e-b1a5738a7f90,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25f8730d4a0423299ed168fe403997aa9d10e446e5138fe38c585761bef40ef6,PodSandboxId:87e269ea6eea386b22ca1be6e36a53f8773e9eabc1c55f0923a0c58ce4bc671f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760390315776840569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00d33771df8cd787e9b4
3bfd79a0deca,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cceabe1eeef125dd69d3df210fc4f4dec5cf77549e55b7eea52f4d468efe88c0,PodSandboxId:bdb0876aa8157989e05ec4baf030f78223a31c0bc5dc922b1942e12a1e75c6d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760390315796525928,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-323324,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6023247c17bdb7942c57bf3f3cc3ebd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9888724e317aa48ab13fcd80d2fcd3a8731bf3e687660015e3312435c56a68,PodSandboxId:233cfcfa704305636e06163d089f34f97e8360e526b722e2fe3c458fd36da082,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760390315748026704,Labels:map[string]
string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b474ab11f9ed0cfacbe0915f53fc096,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8da1c7a0ebf7decf526003d063f40efab7547f1a7c6d6649ff7da760b62ec6d5,PodSandboxId:ff343940be2f6d7e50904a570b57a697abecc07a243c504f379d5387730ceaa1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc696102
4917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760390315745782314,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-323324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b570da59468f290fd78625667929fcd4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=803d44ee-d54c-41a9-84f8-a65a6111bea9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	3fb3952d5a779       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   f82e0925e87ff       hello-world-app-5d498dc89-mdhnd
	cc13bc4c53a22       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago            Running             nginx                     0                   8d56fc4137e10       nginx
	b568cf655451b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   75f1c9d13f880       busybox
	2f135c378870c       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago            Running             controller                0                   bab48738f5ef6       ingress-nginx-controller-675c5ddd98-9tfj8
	7b9f0a791d363       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             3 minutes ago            Exited              patch                     2                   9210d9b94a3a3       ingress-nginx-admission-patch-4qjrx
	977683233a322       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago            Exited              create                    0                   f5c2f5f0a61f0       ingress-nginx-admission-create-8hqf6
	66322053750ec       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago            Running             gadget                    0                   fe49484fece50       gadget-w7k84
	053001a2131fc       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago            Running             local-path-provisioner    0                   21016e9c7c9fd       local-path-provisioner-648f6765c9-dtw59
	f2d010ef97a0e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago            Running             minikube-ingress-dns      0                   30a0b2b2e931a       kube-ingress-dns-minikube
	4886e4df95546       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   ebc74e6068dd5       storage-provisioner
	2c2d5997b01b4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   fb8231bb49919       amd-gpu-device-plugin-8jt96
	de69a3e4003d9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago            Running             coredns                   0                   2ba1e05eb877d       coredns-66bc5c9577-pwpxp
	3425a4d65e31e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago            Running             kube-proxy                0                   9e404262bd5b1       kube-proxy-gpl4b
	cceabe1eeef12       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago            Running             etcd                      0                   bdb0876aa8157       etcd-addons-323324
	25f8730d4a042       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago            Running             kube-scheduler            0                   87e269ea6eea3       kube-scheduler-addons-323324
	7a9888724e317       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago            Running             kube-controller-manager   0                   233cfcfa70430       kube-controller-manager-addons-323324
	8da1c7a0ebf7d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago            Running             kube-apiserver            0                   ff343940be2f6       kube-apiserver-addons-323324
	
	
	==> coredns [de69a3e4003d9a5346bd311f0936e2dde0c744383ac2d378283318e982bf81cb] <==
	[INFO] 10.244.0.9:40781 - 50841 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000453588s
	[INFO] 10.244.0.9:40781 - 14858 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000316325s
	[INFO] 10.244.0.9:40781 - 12515 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.002115981s
	[INFO] 10.244.0.9:40781 - 48111 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000141686s
	[INFO] 10.244.0.9:40781 - 52481 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000115163s
	[INFO] 10.244.0.9:40781 - 50413 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000149427s
	[INFO] 10.244.0.9:40781 - 29652 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000129581s
	[INFO] 10.244.0.9:43914 - 14253 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165071s
	[INFO] 10.244.0.9:43914 - 14556 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000266555s
	[INFO] 10.244.0.9:32917 - 26560 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000278788s
	[INFO] 10.244.0.9:32917 - 26823 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000359995s
	[INFO] 10.244.0.9:43520 - 2976 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076596s
	[INFO] 10.244.0.9:43520 - 3199 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00023809s
	[INFO] 10.244.0.9:34071 - 31801 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166234s
	[INFO] 10.244.0.9:34071 - 31614 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000385123s
	[INFO] 10.244.0.23:37709 - 53506 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000306784s
	[INFO] 10.244.0.23:59340 - 43529 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000209541s
	[INFO] 10.244.0.23:47937 - 65357 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009252s
	[INFO] 10.244.0.23:54418 - 31229 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108447s
	[INFO] 10.244.0.23:51226 - 16504 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174292s
	[INFO] 10.244.0.23:47889 - 28929 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161781s
	[INFO] 10.244.0.23:56703 - 7847 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002542401s
	[INFO] 10.244.0.23:35162 - 13418 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006148778s
	[INFO] 10.244.0.26:53005 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000482451s
	[INFO] 10.244.0.26:37270 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000885934s
	
	
	==> describe nodes <==
	Name:               addons-323324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-323324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=addons-323324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_18_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-323324
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-323324
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:23:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:21:46 +0000   Mon, 13 Oct 2025 21:18:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:21:46 +0000   Mon, 13 Oct 2025 21:18:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:21:46 +0000   Mon, 13 Oct 2025 21:18:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:21:46 +0000   Mon, 13 Oct 2025 21:18:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    addons-323324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b14f5694ea249739e3346b9b045b1c5
	  System UUID:                0b14f569-4ea2-4973-9e33-46b9b045b1c5
	  Boot ID:                    959d6dbe-3d52-40c3-b6ff-233523966654
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-world-app-5d498dc89-mdhnd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gadget                      gadget-w7k84                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9tfj8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m1s
	  kube-system                 amd-gpu-device-plugin-8jt96                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 coredns-66bc5c9577-pwpxp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m10s
	  kube-system                 etcd-addons-323324                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m17s
	  kube-system                 kube-apiserver-addons-323324                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-323324        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-gpl4b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-323324                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  local-path-storage          local-path-provisioner-648f6765c9-dtw59      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node addons-323324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node addons-323324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node addons-323324 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m15s                  kubelet          Node addons-323324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s                  kubelet          Node addons-323324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s                  kubelet          Node addons-323324 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m14s                  kubelet          Node addons-323324 status is now: NodeReady
	  Normal  RegisteredNode           5m11s                  node-controller  Node addons-323324 event: Registered Node addons-323324 in Controller
	
	
	==> dmesg <==
	[  +0.028196] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.004241] kauditd_printk_skb: 292 callbacks suppressed
	[  +0.666463] kauditd_printk_skb: 241 callbacks suppressed
	[Oct13 21:19] kauditd_printk_skb: 444 callbacks suppressed
	[ +15.311695] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.790011] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.527123] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.662263] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.866596] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.427316] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.251157] kauditd_printk_skb: 35 callbacks suppressed
	[Oct13 21:20] kauditd_printk_skb: 181 callbacks suppressed
	[  +5.074513] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.771970] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.837091] kauditd_printk_skb: 26 callbacks suppressed
	[Oct13 21:21] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.020829] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.591714] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.998087] kauditd_printk_skb: 79 callbacks suppressed
	[  +1.342257] kauditd_printk_skb: 203 callbacks suppressed
	[  +2.246543] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.564145] kauditd_printk_skb: 115 callbacks suppressed
	[  +0.962665] kauditd_printk_skb: 113 callbacks suppressed
	[  +6.866641] kauditd_printk_skb: 41 callbacks suppressed
	[Oct13 21:23] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [cceabe1eeef125dd69d3df210fc4f4dec5cf77549e55b7eea52f4d468efe88c0] <==
	{"level":"warn","ts":"2025-10-13T21:20:00.939715Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.825393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:00.939746Z","caller":"traceutil/trace.go:172","msg":"trace[1206770530] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:1106; }","duration":"197.876316ms","start":"2025-10-13T21:20:00.741863Z","end":"2025-10-13T21:20:00.939739Z","steps":["trace[1206770530] 'agreement among raft nodes before linearized reading'  (duration: 197.796496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:20:00.945926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.098692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:00.948369Z","caller":"traceutil/trace.go:172","msg":"trace[1697793659] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1106; }","duration":"164.555643ms","start":"2025-10-13T21:20:00.781810Z","end":"2025-10-13T21:20:00.946366Z","steps":["trace[1697793659] 'agreement among raft nodes before linearized reading'  (duration: 164.009307ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:20:00.950657Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.377708ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:20:00.950704Z","caller":"traceutil/trace.go:172","msg":"trace[965923426] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1107; }","duration":"107.434888ms","start":"2025-10-13T21:20:00.843261Z","end":"2025-10-13T21:20:00.950696Z","steps":["trace[965923426] 'agreement among raft nodes before linearized reading'  (duration: 107.369051ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:20:05.563023Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.477231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/amd-gpu-device-plugin-8jt96.186e29ada74b54ec\" limit:1 ","response":"range_response_count:1 size:832"}
	{"level":"info","ts":"2025-10-13T21:20:05.563095Z","caller":"traceutil/trace.go:172","msg":"trace[1724044438] range","detail":"{range_begin:/registry/events/kube-system/amd-gpu-device-plugin-8jt96.186e29ada74b54ec; range_end:; response_count:1; response_revision:1146; }","duration":"160.557209ms","start":"2025-10-13T21:20:05.402527Z","end":"2025-10-13T21:20:05.563084Z","steps":["trace[1724044438] 'range keys from in-memory index tree'  (duration: 160.369211ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:20:05.565517Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.891476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-kksnx\" limit:1 ","response":"range_response_count:1 size:4045"}
	{"level":"info","ts":"2025-10-13T21:20:05.565568Z","caller":"traceutil/trace.go:172","msg":"trace[1041860144] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-kksnx; range_end:; response_count:1; response_revision:1146; }","duration":"114.947596ms","start":"2025-10-13T21:20:05.450612Z","end":"2025-10-13T21:20:05.565560Z","steps":["trace[1041860144] 'range keys from in-memory index tree'  (duration: 114.806557ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:16.276908Z","caller":"traceutil/trace.go:172","msg":"trace[2133296008] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"107.179186ms","start":"2025-10-13T21:20:16.169711Z","end":"2025-10-13T21:20:16.276891Z","steps":["trace[2133296008] 'process raft request'  (duration: 107.086818ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:18.559652Z","caller":"traceutil/trace.go:172","msg":"trace[130455267] transaction","detail":"{read_only:false; response_revision:1217; number_of_response:1; }","duration":"142.028265ms","start":"2025-10-13T21:20:18.417613Z","end":"2025-10-13T21:20:18.559641Z","steps":["trace[130455267] 'process raft request'  (duration: 141.756142ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:20:24.690215Z","caller":"traceutil/trace.go:172","msg":"trace[927742210] transaction","detail":"{read_only:false; response_revision:1239; number_of_response:1; }","duration":"232.354774ms","start":"2025-10-13T21:20:24.457847Z","end":"2025-10-13T21:20:24.690202Z","steps":["trace[927742210] 'process raft request'  (duration: 232.205104ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:21:13.106797Z","caller":"traceutil/trace.go:172","msg":"trace[1407372465] transaction","detail":"{read_only:false; response_revision:1412; number_of_response:1; }","duration":"110.511669ms","start":"2025-10-13T21:21:12.996270Z","end":"2025-10-13T21:21:13.106782Z","steps":["trace[1407372465] 'process raft request'  (duration: 110.38973ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:21:14.206983Z","caller":"traceutil/trace.go:172","msg":"trace[1802151420] transaction","detail":"{read_only:false; response_revision:1436; number_of_response:1; }","duration":"139.255408ms","start":"2025-10-13T21:21:14.067714Z","end":"2025-10-13T21:21:14.206970Z","steps":["trace[1802151420] 'process raft request'  (duration: 139.139855ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:21:20.977950Z","caller":"traceutil/trace.go:172","msg":"trace[932456015] linearizableReadLoop","detail":"{readStateIndex:1518; appliedIndex:1518; }","duration":"226.965371ms","start":"2025-10-13T21:21:20.750956Z","end":"2025-10-13T21:21:20.977921Z","steps":["trace[932456015] 'read index received'  (duration: 226.896657ms)","trace[932456015] 'applied index is now lower than readState.Index'  (duration: 67.612µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-13T21:21:20.978179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.257433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:21:20.978204Z","caller":"traceutil/trace.go:172","msg":"trace[2085553658] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1464; }","duration":"227.316489ms","start":"2025-10-13T21:21:20.750881Z","end":"2025-10-13T21:21:20.978198Z","steps":["trace[2085553658] 'agreement among raft nodes before linearized reading'  (duration: 227.225516ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:21:20.978686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.367683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2025-10-13T21:21:20.978734Z","caller":"traceutil/trace.go:172","msg":"trace[1725454270] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1464; }","duration":"141.422393ms","start":"2025-10-13T21:21:20.837305Z","end":"2025-10-13T21:21:20.978728Z","steps":["trace[1725454270] 'agreement among raft nodes before linearized reading'  (duration: 141.290422ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:21:20.979017Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.328887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:21:20.979070Z","caller":"traceutil/trace.go:172","msg":"trace[1678961649] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1464; }","duration":"109.46178ms","start":"2025-10-13T21:21:20.869597Z","end":"2025-10-13T21:21:20.979059Z","steps":["trace[1678961649] 'agreement among raft nodes before linearized reading'  (duration: 109.257134ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T21:21:20.979247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.729834ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-13T21:21:20.979831Z","caller":"traceutil/trace.go:172","msg":"trace[814342893] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1464; }","duration":"138.311389ms","start":"2025-10-13T21:21:20.841509Z","end":"2025-10-13T21:21:20.979821Z","steps":["trace[814342893] 'agreement among raft nodes before linearized reading'  (duration: 137.717738ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T21:21:38.608053Z","caller":"traceutil/trace.go:172","msg":"trace[401854682] transaction","detail":"{read_only:false; response_revision:1672; number_of_response:1; }","duration":"247.893333ms","start":"2025-10-13T21:21:38.360102Z","end":"2025-10-13T21:21:38.607995Z","steps":["trace[401854682] 'process raft request'  (duration: 247.804841ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:23:56 up 5 min,  0 users,  load average: 0.54, 1.17, 0.65
	Linux addons-323324 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8da1c7a0ebf7decf526003d063f40efab7547f1a7c6d6649ff7da760b62ec6d5] <==
	E1013 21:19:38.391258       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.246.217:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.246.217:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.246.217:443: connect: connection refused" logger="UnhandledError"
	E1013 21:19:38.412858       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.246.217:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.246.217:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.246.217:443: connect: connection refused" logger="UnhandledError"
	E1013 21:19:38.454776       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.246.217:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.246.217:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.246.217:443: connect: connection refused" logger="UnhandledError"
	I1013 21:19:38.631886       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 21:20:59.142797       1 conn.go:339] Error on socket receive: read tcp 192.168.39.156:8443->192.168.39.1:43090: use of closed network connection
	E1013 21:20:59.337198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.156:8443->192.168.39.1:43100: use of closed network connection
	I1013 21:21:08.663868       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.79.15"}
	I1013 21:21:25.066463       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 21:21:25.341630       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.102.141"}
	I1013 21:21:36.472265       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1013 21:21:39.431549       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1013 21:21:59.437588       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 21:21:59.437742       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 21:21:59.473276       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 21:21:59.473516       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 21:21:59.498330       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 21:21:59.498527       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 21:21:59.509805       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 21:21:59.509857       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1013 21:21:59.585141       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1013 21:21:59.585270       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1013 21:22:00.497828       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1013 21:22:00.591168       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1013 21:22:00.662804       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1013 21:23:54.476320       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.236.242"}
	
	
	==> kube-controller-manager [7a9888724e317aa48ab13fcd80d2fcd3a8731bf3e687660015e3312435c56a68] <==
	E1013 21:22:15.883166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1013 21:22:15.912769       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 21:22:15.912825       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:22:15.990847       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 21:22:15.990921       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1013 21:22:16.330707       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:22:16.332079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:22:17.401516       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:22:17.403001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:22:29.051266       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:22:29.052728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:22:31.896626       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:22:31.897798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:22:38.038667       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:22:38.040455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:23:00.201906       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:23:00.203133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:23:03.623666       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:23:03.625072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:23:16.869999       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:23:16.871248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:23:39.459841       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:23:39.461323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1013 21:23:56.032182       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1013 21:23:56.034291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [3425a4d65e31eaca6b66edf83b549969ab54ccaefaeb9bf5dac23e56c7bf4add] <==
	I1013 21:18:48.452219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:18:48.555667       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:18:48.555720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.156"]
	E1013 21:18:48.565932       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:18:48.916768       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:18:48.916838       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:18:48.916867       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:18:48.964716       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:18:48.965104       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:18:48.965115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:18:48.976714       1 config.go:200] "Starting service config controller"
	I1013 21:18:48.978197       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:18:48.978250       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:18:48.978256       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:18:48.978269       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:18:48.978272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:18:48.981060       1 config.go:309] "Starting node config controller"
	I1013 21:18:48.981092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:18:48.981098       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:18:49.078311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:18:49.078509       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:18:49.078564       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [25f8730d4a0423299ed168fe403997aa9d10e446e5138fe38c585761bef40ef6] <==
	E1013 21:18:38.820718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:18:38.820777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:18:38.820817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:18:38.824596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:18:38.824829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:18:38.824525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:18:38.824922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 21:18:38.824963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:18:38.825297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:18:38.825471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:18:38.825549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:18:39.712822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:18:39.734108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:18:39.739934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:18:39.759681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:18:39.861356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:18:39.919296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 21:18:39.944022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:18:39.953964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:18:40.022126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 21:18:40.082374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:18:40.095366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:18:40.148990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 21:18:40.189057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1013 21:18:41.809911       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:22:12 addons-323324 kubelet[1517]: E1013 21:22:12.028034    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390532027498289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:22 addons-323324 kubelet[1517]: E1013 21:22:22.031175    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390542030706646  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:22 addons-323324 kubelet[1517]: E1013 21:22:22.031200    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390542030706646  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:32 addons-323324 kubelet[1517]: E1013 21:22:32.034758    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390552034210006  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:32 addons-323324 kubelet[1517]: E1013 21:22:32.035314    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390552034210006  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:36 addons-323324 kubelet[1517]: I1013 21:22:36.397381    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8jt96" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:22:42 addons-323324 kubelet[1517]: E1013 21:22:42.038065    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390562037630938  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:42 addons-323324 kubelet[1517]: E1013 21:22:42.038110    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390562037630938  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:52 addons-323324 kubelet[1517]: E1013 21:22:52.042045    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390572041554121  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:22:52 addons-323324 kubelet[1517]: E1013 21:22:52.042092    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390572041554121  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:02 addons-323324 kubelet[1517]: E1013 21:23:02.044575    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390582043979968  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:02 addons-323324 kubelet[1517]: E1013 21:23:02.044652    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390582043979968  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:12 addons-323324 kubelet[1517]: E1013 21:23:12.049725    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390592048929416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:12 addons-323324 kubelet[1517]: E1013 21:23:12.049768    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390592048929416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:15 addons-323324 kubelet[1517]: I1013 21:23:15.398457    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:23:22 addons-323324 kubelet[1517]: E1013 21:23:22.053561    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390602052920460  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:22 addons-323324 kubelet[1517]: E1013 21:23:22.053586    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390602052920460  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:32 addons-323324 kubelet[1517]: E1013 21:23:32.056182    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390612055812669  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:32 addons-323324 kubelet[1517]: E1013 21:23:32.056213    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390612055812669  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:42 addons-323324 kubelet[1517]: E1013 21:23:42.059488    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390622058730051  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:42 addons-323324 kubelet[1517]: E1013 21:23:42.059526    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390622058730051  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:52 addons-323324 kubelet[1517]: E1013 21:23:52.063227    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760390632062467871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:52 addons-323324 kubelet[1517]: E1013 21:23:52.063530    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760390632062467871  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 13 21:23:54 addons-323324 kubelet[1517]: I1013 21:23:54.482348    1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hnc2\" (UniqueName: \"kubernetes.io/projected/fc7056db-e3b1-41cf-8f34-2c716f5b686c-kube-api-access-6hnc2\") pod \"hello-world-app-5d498dc89-mdhnd\" (UID: \"fc7056db-e3b1-41cf-8f34-2c716f5b686c\") " pod="default/hello-world-app-5d498dc89-mdhnd"
	Oct 13 21:23:56 addons-323324 kubelet[1517]: I1013 21:23:56.317340    1517 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-mdhnd" podStartSLOduration=1.569936862 podStartE2EDuration="2.317312633s" podCreationTimestamp="2025-10-13 21:23:54 +0000 UTC" firstStartedPulling="2025-10-13 21:23:55.030215079 +0000 UTC m=+313.794640441" lastFinishedPulling="2025-10-13 21:23:55.777590844 +0000 UTC m=+314.542016212" observedRunningTime="2025-10-13 21:23:56.312465772 +0000 UTC m=+315.076891153" watchObservedRunningTime="2025-10-13 21:23:56.317312633 +0000 UTC m=+315.081737997"
	
	
	==> storage-provisioner [4886e4df955463c3857d14bacf52642ac4532fd6160f3b1ecba1bb73dfa08140] <==
	W1013 21:23:32.192126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:34.196196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:34.201580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:36.205591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:36.211695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:38.215694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:38.221468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:40.225265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:40.231470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:42.235994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:42.244866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:44.249054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:44.255268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:46.259917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:46.265975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:48.269538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:48.307596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:50.311906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:50.317276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:52.324542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:52.330581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:54.335280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:54.346288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:56.367545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:23:56.382586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
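The storage-provisioner warnings at the end of the log above are emitted because that component still reads and watches the core v1 Endpoints API, which is deprecated in favour of discovery.k8s.io/v1 EndpointSlice; they are noisy but unrelated to the ingress check itself. A hedged sketch of looking at the equivalent EndpointSlice objects directly:

	# Illustrative only, not part of the recorded test run: list EndpointSlices
	# (the replacement for v1 Endpoints) in kube-system on the same cluster.
	kubectl --context addons-323324 -n kube-system get endpointslices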
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-323324 -n addons-323324
helpers_test.go:269: (dbg) Run:  kubectl --context addons-323324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-8hqf6 ingress-nginx-admission-patch-4qjrx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-323324 describe pod ingress-nginx-admission-create-8hqf6 ingress-nginx-admission-patch-4qjrx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-323324 describe pod ingress-nginx-admission-create-8hqf6 ingress-nginx-admission-patch-4qjrx: exit status 1 (56.822505ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8hqf6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4qjrx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-323324 describe pod ingress-nginx-admission-create-8hqf6 ingress-nginx-admission-patch-4qjrx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 addons disable ingress-dns --alsologtostderr -v=1: (1.084303449s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 addons disable ingress --alsologtostderr -v=1: (7.812304094s)
--- FAIL: TestAddons/parallel/Ingress (161.44s)
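One way to narrow down an ingress failure like this is to bypass the ingress-nginx controller and hit the backing Service directly; a minimal sketch, assuming the default namespace and the nginx Service clusterIP 10.101.102.141 reported in the kube-apiserver log above:

	# Illustrative only, not part of the recorded test run: inspect the Ingress
	# and its backend Service, then curl the Service clusterIP from inside the VM
	# to separate controller routing problems from pod/Service problems.
	kubectl --context addons-323324 -n default get ingress -o wide
	kubectl --context addons-323324 -n default describe svc nginx
	out/minikube-linux-amd64 -p addons-323324 ssh "curl -sv http://10.101.102.141/"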

                                                
                                    
TestFunctional/serial/ExtraConfig (351.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-613120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1013 21:30:49.952057   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:31:17.662248   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-613120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5m49.929426109s)

                                                
                                                
-- stdout --
	* [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-613120" primary control-plane node in "functional-613120" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-613120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 5m49.929640613s for "functional-613120" cluster.
I1013 21:35:39.325805   19947 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
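The exit status 80 here is minikube giving up while waiting for the components requested by --wait=all within its wait window ("extra waiting: WaitExtra: context deadline exceeded"), not a crash of the start itself; the stdout above shows the node and addons coming up. A sketch of a manual retry with a longer wait and verbose logging, assuming this minikube build supports the standard --wait-timeout flag:

	# Illustrative retry, not part of the recorded test run.
	out/minikube-linux-amd64 start -p functional-613120 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all --wait-timeout=15m --alsologtostderr -v=2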
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-613120 -n functional-613120
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs -n 25: (1.399385345s)
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-455348 --log_dir /tmp/nospam-455348 unpause                                                                                      │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:26 UTC │
	│ unpause │ nospam-455348 --log_dir /tmp/nospam-455348 unpause                                                                                      │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:26 UTC │
	│ unpause │ nospam-455348 --log_dir /tmp/nospam-455348 unpause                                                                                      │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:26 UTC │
	│ stop    │ nospam-455348 --log_dir /tmp/nospam-455348 stop                                                                                         │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:27 UTC │
	│ stop    │ nospam-455348 --log_dir /tmp/nospam-455348 stop                                                                                         │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ stop    │ nospam-455348 --log_dir /tmp/nospam-455348 stop                                                                                         │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ delete  │ -p nospam-455348                                                                                                                        │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ start   │ -p functional-613120 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:29 UTC │
	│ start   │ -p functional-613120 --alsologtostderr -v=8                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add registry.k8s.io/pause:3.1                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add registry.k8s.io/pause:3.3                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add registry.k8s.io/pause:latest                                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add minikube-local-cache-test:functional-613120                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache delete minikube-local-cache-test:functional-613120                                                              │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ list                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl images                                                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                      │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │                     │
	│ cache   │ functional-613120 cache reload                                                                                                          │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ kubectl │ functional-613120 kubectl -- --context functional-613120 get pods                                                                       │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ start   │ -p functional-613120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:29:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:29:49.440815   26276 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:29:49.441076   26276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:29:49.441080   26276 out.go:374] Setting ErrFile to fd 2...
	I1013 21:29:49.441084   26276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:29:49.441341   26276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:29:49.441755   26276 out.go:368] Setting JSON to false
	I1013 21:29:49.442698   26276 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4337,"bootTime":1760386652,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:29:49.442764   26276 start.go:141] virtualization: kvm guest
	I1013 21:29:49.444898   26276 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:29:49.446364   26276 notify.go:220] Checking for updates...
	I1013 21:29:49.446391   26276 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:29:49.447666   26276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:29:49.448777   26276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:29:49.449957   26276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:29:49.451011   26276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:29:49.452209   26276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:29:49.453724   26276 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:29:49.453796   26276 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:29:49.454257   26276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:29:49.454309   26276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:29:49.467644   26276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I1013 21:29:49.468202   26276 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:29:49.468707   26276 main.go:141] libmachine: Using API Version  1
	I1013 21:29:49.468729   26276 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:29:49.469085   26276 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:29:49.469326   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:49.499670   26276 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:29:49.501112   26276 start.go:305] selected driver: kvm2
	I1013 21:29:49.501121   26276 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:29:49.501247   26276 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:29:49.501550   26276 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:29:49.501615   26276 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:29:49.515972   26276 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:29:49.515992   26276 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:29:49.529247   26276 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:29:49.529919   26276 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:29:49.529939   26276 cni.go:84] Creating CNI manager for ""
	I1013 21:29:49.529993   26276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:29:49.530039   26276 start.go:349] cluster config:
	{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:29:49.530129   26276 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:29:49.532533   26276 out.go:179] * Starting "functional-613120" primary control-plane node in "functional-613120" cluster
	I1013 21:29:49.533706   26276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:29:49.533732   26276 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:29:49.533737   26276 cache.go:58] Caching tarball of preloaded images
	I1013 21:29:49.533834   26276 preload.go:233] Found /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:29:49.533841   26276 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:29:49.533920   26276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/config.json ...
	I1013 21:29:49.534096   26276 start.go:360] acquireMachinesLock for functional-613120: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 21:29:49.534133   26276 start.go:364] duration metric: took 24.968µs to acquireMachinesLock for "functional-613120"
	I1013 21:29:49.534143   26276 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:29:49.534146   26276 fix.go:54] fixHost starting: 
	I1013 21:29:49.534425   26276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:29:49.534451   26276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:29:49.547203   26276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I1013 21:29:49.547582   26276 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:29:49.547943   26276 main.go:141] libmachine: Using API Version  1
	I1013 21:29:49.547964   26276 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:29:49.548347   26276 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:29:49.548595   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:49.548758   26276 main.go:141] libmachine: (functional-613120) Calling .GetState
	I1013 21:29:49.550458   26276 fix.go:112] recreateIfNeeded on functional-613120: state=Running err=<nil>
	W1013 21:29:49.550478   26276 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:29:49.552106   26276 out.go:252] * Updating the running kvm2 "functional-613120" VM ...
	I1013 21:29:49.552128   26276 machine.go:93] provisionDockerMachine start ...
	I1013 21:29:49.552138   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:49.552324   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.554836   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.555236   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.555253   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.555381   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:49.555532   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.555660   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.555744   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:49.555851   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:49.556048   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:49.556053   26276 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:29:49.674883   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-613120
	
	I1013 21:29:49.674900   26276 main.go:141] libmachine: (functional-613120) Calling .GetMachineName
	I1013 21:29:49.675116   26276 buildroot.go:166] provisioning hostname "functional-613120"
	I1013 21:29:49.675133   26276 main.go:141] libmachine: (functional-613120) Calling .GetMachineName
	I1013 21:29:49.675325   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.678135   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.678563   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.678585   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.678720   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:49.678882   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.679003   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.679109   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:49.679313   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:49.679506   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:49.679512   26276 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-613120 && echo "functional-613120" | sudo tee /etc/hostname
	I1013 21:29:49.815823   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-613120
	
	I1013 21:29:49.815838   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.818953   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.819476   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.819488   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.819705   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:49.819911   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.820075   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.820240   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:49.820363   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:49.820542   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:49.820551   26276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-613120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-613120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-613120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:29:49.938900   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:29:49.938941   26276 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 21:29:49.938963   26276 buildroot.go:174] setting up certificates
	I1013 21:29:49.938981   26276 provision.go:84] configureAuth start
	I1013 21:29:49.938988   26276 main.go:141] libmachine: (functional-613120) Calling .GetMachineName
	I1013 21:29:49.939310   26276 main.go:141] libmachine: (functional-613120) Calling .GetIP
	I1013 21:29:49.942965   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.943473   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.943496   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.943725   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.946410   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.946746   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.946757   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.946878   26276 provision.go:143] copyHostCerts
	I1013 21:29:49.946918   26276 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 21:29:49.946931   26276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 21:29:49.947007   26276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 21:29:49.947111   26276 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 21:29:49.947114   26276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 21:29:49.947139   26276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 21:29:49.947235   26276 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 21:29:49.947239   26276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 21:29:49.947267   26276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 21:29:49.947340   26276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.functional-613120 san=[127.0.0.1 192.168.39.113 functional-613120 localhost minikube]
	I1013 21:29:50.505214   26276 provision.go:177] copyRemoteCerts
	I1013 21:29:50.505256   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:29:50.505275   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:50.508239   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.508580   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:50.508601   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.508823   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:50.509037   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:50.509276   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:50.509453   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:50.601983   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 21:29:50.637861   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 21:29:50.670340   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 21:29:50.701779   26276 provision.go:87] duration metric: took 762.786839ms to configureAuth
	I1013 21:29:50.701796   26276 buildroot.go:189] setting minikube options for container-runtime
	I1013 21:29:50.701960   26276 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:29:50.702027   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:50.704884   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.705260   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:50.705271   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.705442   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:50.705663   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:50.705827   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:50.705977   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:50.706141   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:50.706368   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:50.706376   26276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:29:56.406555   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:29:56.406569   26276 machine.go:96] duration metric: took 6.854435483s to provisionDockerMachine
	I1013 21:29:56.406579   26276 start.go:293] postStartSetup for "functional-613120" (driver="kvm2")
	I1013 21:29:56.406587   26276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:29:56.406600   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.406931   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:29:56.406992   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.409790   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.410210   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.410233   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.410372   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.410581   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.410732   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.410909   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:56.500379   26276 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:29:56.505569   26276 info.go:137] Remote host: Buildroot 2025.02
	I1013 21:29:56.505594   26276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 21:29:56.505660   26276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 21:29:56.505766   26276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 21:29:56.505830   26276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/test/nested/copy/19947/hosts -> hosts in /etc/test/nested/copy/19947
	I1013 21:29:56.505860   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/19947
	I1013 21:29:56.518395   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 21:29:56.554366   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/test/nested/copy/19947/hosts --> /etc/test/nested/copy/19947/hosts (40 bytes)
	I1013 21:29:56.590660   26276 start.go:296] duration metric: took 184.067966ms for postStartSetup
	I1013 21:29:56.590689   26276 fix.go:56] duration metric: took 7.056542663s for fixHost
	I1013 21:29:56.590706   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.593883   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.594280   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.594319   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.594495   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.594700   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.594867   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.595050   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.595241   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:56.595431   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:56.595435   26276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 21:29:56.716086   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760390996.709359886
	
	I1013 21:29:56.716097   26276 fix.go:216] guest clock: 1760390996.709359886
	I1013 21:29:56.716104   26276 fix.go:229] Guest: 2025-10-13 21:29:56.709359886 +0000 UTC Remote: 2025-10-13 21:29:56.590691256 +0000 UTC m=+7.190574892 (delta=118.66863ms)
	I1013 21:29:56.716122   26276 fix.go:200] guest clock delta is within tolerance: 118.66863ms
	I1013 21:29:56.716126   26276 start.go:83] releasing machines lock for "functional-613120", held for 7.181988331s
	I1013 21:29:56.716141   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.716396   26276 main.go:141] libmachine: (functional-613120) Calling .GetIP
	I1013 21:29:56.719177   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.719606   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.719630   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.719802   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.720346   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.720498   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.720600   26276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:29:56.720630   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.720682   26276 ssh_runner.go:195] Run: cat /version.json
	I1013 21:29:56.720696   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.723704   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.723737   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.724063   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.724089   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.724110   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.724231   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.724316   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.724506   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.724603   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.724669   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.724737   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.724796   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:56.724843   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.724963   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:56.870143   26276 ssh_runner.go:195] Run: systemctl --version
	I1013 21:29:56.923214   26276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:29:57.180062   26276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:29:57.199701   26276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:29:57.199766   26276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:29:57.224345   26276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 21:29:57.224360   26276 start.go:495] detecting cgroup driver to use...
	I1013 21:29:57.224421   26276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:29:57.265666   26276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:29:57.310498   26276 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:29:57.310560   26276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:29:57.350114   26276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:29:57.380389   26276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:29:57.644994   26276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:29:57.837112   26276 docker.go:234] disabling docker service ...
	I1013 21:29:57.837182   26276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:29:57.866375   26276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:29:57.892095   26276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:29:58.129135   26276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:29:58.311135   26276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:29:58.328458   26276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:29:58.354029   26276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:29:58.354086   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.367309   26276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 21:29:58.367358   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.380815   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.395036   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.408251   26276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:29:58.423198   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.436621   26276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.450843   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.463563   26276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:29:58.474692   26276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:29:58.487040   26276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:29:58.665117   26276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:31:29.023052   26276 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.357908146s)
	I1013 21:31:29.023102   26276 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:31:29.023180   26276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:31:29.029455   26276 start.go:563] Will wait 60s for crictl version
	I1013 21:31:29.029504   26276 ssh_runner.go:195] Run: which crictl
	I1013 21:31:29.034464   26276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 21:31:29.076608   26276 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 21:31:29.076664   26276 ssh_runner.go:195] Run: crio --version
	I1013 21:31:29.110153   26276 ssh_runner.go:195] Run: crio --version
	I1013 21:31:29.143471   26276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 21:31:29.145044   26276 main.go:141] libmachine: (functional-613120) Calling .GetIP
	I1013 21:31:29.148262   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:31:29.148722   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:31:29.148745   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:31:29.149032   26276 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 21:31:29.155580   26276 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1013 21:31:29.157089   26276 kubeadm.go:883] updating cluster {Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:31:29.157228   26276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:31:29.157293   26276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:31:29.208616   26276 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:31:29.208626   26276 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:31:29.208705   26276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:31:29.250833   26276 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:31:29.250844   26276 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:31:29.250850   26276 kubeadm.go:934] updating node { 192.168.39.113 8441 v1.34.1 crio true true} ...
	I1013 21:31:29.250931   26276 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-613120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:31:29.251006   26276 ssh_runner.go:195] Run: crio config
	I1013 21:31:29.301886   26276 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1013 21:31:29.301911   26276 cni.go:84] Creating CNI manager for ""
	I1013 21:31:29.301923   26276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:31:29.301935   26276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:31:29.301956   26276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.113 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-613120 NodeName:functional-613120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:31:29.302099   26276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.113
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-613120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.113"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.113"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:31:29.302170   26276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:31:29.315616   26276 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:31:29.315684   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:31:29.328457   26276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1013 21:31:29.349953   26276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:31:29.371819   26276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1013 21:31:29.393809   26276 ssh_runner.go:195] Run: grep 192.168.39.113	control-plane.minikube.internal$ /etc/hosts
	I1013 21:31:29.398778   26276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:31:29.570473   26276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:31:29.588926   26276 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120 for IP: 192.168.39.113
	I1013 21:31:29.588937   26276 certs.go:195] generating shared ca certs ...
	I1013 21:31:29.588951   26276 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:31:29.589089   26276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 21:31:29.589120   26276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 21:31:29.589125   26276 certs.go:257] generating profile certs ...
	I1013 21:31:29.589216   26276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.key
	I1013 21:31:29.589261   26276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/apiserver.key.b3c6289d
	I1013 21:31:29.589292   26276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/proxy-client.key
	I1013 21:31:29.589397   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 21:31:29.589420   26276 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 21:31:29.589425   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:31:29.589444   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:31:29.589461   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:31:29.589477   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 21:31:29.589510   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 21:31:29.590055   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:31:29.623552   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:31:29.654637   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:31:29.685504   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:31:29.716743   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 21:31:29.747883   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:31:29.779761   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:31:29.811055   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:31:29.842083   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 21:31:29.873747   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 21:31:29.906062   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:31:29.937606   26276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:31:29.959141   26276 ssh_runner.go:195] Run: openssl version
	I1013 21:31:29.966216   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 21:31:29.980104   26276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 21:31:29.985859   26276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 21:31:29.985911   26276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 21:31:29.994513   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 21:31:30.007015   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 21:31:30.022278   26276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 21:31:30.028218   26276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 21:31:30.028283   26276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 21:31:30.036418   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:31:30.049741   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:31:30.063926   26276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:31:30.070173   26276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:31:30.070233   26276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:31:30.078206   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:31:30.091383   26276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:31:30.097602   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 21:31:30.105375   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 21:31:30.113500   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 21:31:30.121703   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 21:31:30.129715   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 21:31:30.137334   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 21:31:30.144688   26276 kubeadm.go:400] StartCluster: {Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:31:30.144760   26276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:31:30.144830   26276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:31:30.188942   26276 cri.go:89] found id: "cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15"
	I1013 21:31:30.188952   26276 cri.go:89] found id: "5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a"
	I1013 21:31:30.188955   26276 cri.go:89] found id: "bcc1c82c55ee710c48ef5bdf25e3c630ae95ea6aefdaca9408754829efc78844"
	I1013 21:31:30.188958   26276 cri.go:89] found id: "203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac"
	I1013 21:31:30.188960   26276 cri.go:89] found id: "b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87"
	I1013 21:31:30.188962   26276 cri.go:89] found id: "26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976"
	I1013 21:31:30.188963   26276 cri.go:89] found id: "6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5"
	I1013 21:31:30.188964   26276 cri.go:89] found id: ""
	I1013 21:31:30.189013   26276 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
helpers_test.go:269: (dbg) Run:  kubectl --context functional-613120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (351.93s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (2.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-613120 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:848: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:False} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.113 PodIP:192.168.39.113 StartTime:2025-10-13 21:31:32 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:<nil> Terminated:0xc0002ae230} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:1 Image:registry.k8s.io/kube-controller-manager:v1.34.1 ImageID:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f ContainerID:cri-o://6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5}]}
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-613120 -n functional-613120
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs -n 25: (1.371749066s)
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-455348 --log_dir /tmp/nospam-455348 unpause                                                                                      │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:26 UTC │
	│ unpause │ nospam-455348 --log_dir /tmp/nospam-455348 unpause                                                                                      │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:26 UTC │
	│ unpause │ nospam-455348 --log_dir /tmp/nospam-455348 unpause                                                                                      │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:26 UTC │
	│ stop    │ nospam-455348 --log_dir /tmp/nospam-455348 stop                                                                                         │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:26 UTC │ 13 Oct 25 21:27 UTC │
	│ stop    │ nospam-455348 --log_dir /tmp/nospam-455348 stop                                                                                         │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ stop    │ nospam-455348 --log_dir /tmp/nospam-455348 stop                                                                                         │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ delete  │ -p nospam-455348                                                                                                                        │ nospam-455348     │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:27 UTC │
	│ start   │ -p functional-613120 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:27 UTC │ 13 Oct 25 21:29 UTC │
	│ start   │ -p functional-613120 --alsologtostderr -v=8                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add registry.k8s.io/pause:3.1                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add registry.k8s.io/pause:3.3                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add registry.k8s.io/pause:latest                                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache add minikube-local-cache-test:functional-613120                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ functional-613120 cache delete minikube-local-cache-test:functional-613120                                                              │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ list                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl images                                                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                      │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │                     │
	│ cache   │ functional-613120 cache reload                                                                                                          │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ ssh     │ functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ kubectl │ functional-613120 kubectl -- --context functional-613120 get pods                                                                       │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ start   │ -p functional-613120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:29:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:29:49.440815   26276 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:29:49.441076   26276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:29:49.441080   26276 out.go:374] Setting ErrFile to fd 2...
	I1013 21:29:49.441084   26276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:29:49.441341   26276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:29:49.441755   26276 out.go:368] Setting JSON to false
	I1013 21:29:49.442698   26276 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4337,"bootTime":1760386652,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:29:49.442764   26276 start.go:141] virtualization: kvm guest
	I1013 21:29:49.444898   26276 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:29:49.446364   26276 notify.go:220] Checking for updates...
	I1013 21:29:49.446391   26276 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:29:49.447666   26276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:29:49.448777   26276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:29:49.449957   26276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:29:49.451011   26276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:29:49.452209   26276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:29:49.453724   26276 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:29:49.453796   26276 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:29:49.454257   26276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:29:49.454309   26276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:29:49.467644   26276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I1013 21:29:49.468202   26276 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:29:49.468707   26276 main.go:141] libmachine: Using API Version  1
	I1013 21:29:49.468729   26276 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:29:49.469085   26276 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:29:49.469326   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:49.499670   26276 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:29:49.501112   26276 start.go:305] selected driver: kvm2
	I1013 21:29:49.501121   26276 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:29:49.501247   26276 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:29:49.501550   26276 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:29:49.501615   26276 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:29:49.515972   26276 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:29:49.515992   26276 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:29:49.529247   26276 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:29:49.529919   26276 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:29:49.529939   26276 cni.go:84] Creating CNI manager for ""
	I1013 21:29:49.529993   26276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:29:49.530039   26276 start.go:349] cluster config:
	{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:29:49.530129   26276 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:29:49.532533   26276 out.go:179] * Starting "functional-613120" primary control-plane node in "functional-613120" cluster
	I1013 21:29:49.533706   26276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:29:49.533732   26276 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 21:29:49.533737   26276 cache.go:58] Caching tarball of preloaded images
	I1013 21:29:49.533834   26276 preload.go:233] Found /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 21:29:49.533841   26276 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:29:49.533920   26276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/config.json ...
	I1013 21:29:49.534096   26276 start.go:360] acquireMachinesLock for functional-613120: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 21:29:49.534133   26276 start.go:364] duration metric: took 24.968µs to acquireMachinesLock for "functional-613120"
	I1013 21:29:49.534143   26276 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:29:49.534146   26276 fix.go:54] fixHost starting: 
	I1013 21:29:49.534425   26276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:29:49.534451   26276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:29:49.547203   26276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I1013 21:29:49.547582   26276 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:29:49.547943   26276 main.go:141] libmachine: Using API Version  1
	I1013 21:29:49.547964   26276 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:29:49.548347   26276 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:29:49.548595   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:49.548758   26276 main.go:141] libmachine: (functional-613120) Calling .GetState
	I1013 21:29:49.550458   26276 fix.go:112] recreateIfNeeded on functional-613120: state=Running err=<nil>
	W1013 21:29:49.550478   26276 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:29:49.552106   26276 out.go:252] * Updating the running kvm2 "functional-613120" VM ...
	I1013 21:29:49.552128   26276 machine.go:93] provisionDockerMachine start ...
	I1013 21:29:49.552138   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:49.552324   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.554836   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.555236   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.555253   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.555381   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:49.555532   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.555660   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.555744   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:49.555851   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:49.556048   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:49.556053   26276 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:29:49.674883   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-613120
	
	I1013 21:29:49.674900   26276 main.go:141] libmachine: (functional-613120) Calling .GetMachineName
	I1013 21:29:49.675116   26276 buildroot.go:166] provisioning hostname "functional-613120"
	I1013 21:29:49.675133   26276 main.go:141] libmachine: (functional-613120) Calling .GetMachineName
	I1013 21:29:49.675325   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.678135   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.678563   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.678585   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.678720   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:49.678882   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.679003   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.679109   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:49.679313   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:49.679506   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:49.679512   26276 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-613120 && echo "functional-613120" | sudo tee /etc/hostname
	I1013 21:29:49.815823   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-613120
	
	I1013 21:29:49.815838   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.818953   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.819476   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.819488   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.819705   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:49.819911   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.820075   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:49.820240   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:49.820363   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:49.820542   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:49.820551   26276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-613120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-613120/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-613120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:29:49.938900   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:29:49.938941   26276 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 21:29:49.938963   26276 buildroot.go:174] setting up certificates
	I1013 21:29:49.938981   26276 provision.go:84] configureAuth start
	I1013 21:29:49.938988   26276 main.go:141] libmachine: (functional-613120) Calling .GetMachineName
	I1013 21:29:49.939310   26276 main.go:141] libmachine: (functional-613120) Calling .GetIP
	I1013 21:29:49.942965   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.943473   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.943496   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.943725   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:49.946410   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.946746   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:49.946757   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:49.946878   26276 provision.go:143] copyHostCerts
	I1013 21:29:49.946918   26276 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 21:29:49.946931   26276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 21:29:49.947007   26276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 21:29:49.947111   26276 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 21:29:49.947114   26276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 21:29:49.947139   26276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 21:29:49.947235   26276 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 21:29:49.947239   26276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 21:29:49.947267   26276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 21:29:49.947340   26276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.functional-613120 san=[127.0.0.1 192.168.39.113 functional-613120 localhost minikube]
	I1013 21:29:50.505214   26276 provision.go:177] copyRemoteCerts
	I1013 21:29:50.505256   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:29:50.505275   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:50.508239   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.508580   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:50.508601   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.508823   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:50.509037   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:50.509276   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:50.509453   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:50.601983   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 21:29:50.637861   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 21:29:50.670340   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 21:29:50.701779   26276 provision.go:87] duration metric: took 762.786839ms to configureAuth
	I1013 21:29:50.701796   26276 buildroot.go:189] setting minikube options for container-runtime
	I1013 21:29:50.701960   26276 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:29:50.702027   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:50.704884   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.705260   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:50.705271   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:50.705442   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:50.705663   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:50.705827   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:50.705977   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:50.706141   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:50.706368   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:50.706376   26276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:29:56.406555   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:29:56.406569   26276 machine.go:96] duration metric: took 6.854435483s to provisionDockerMachine
	I1013 21:29:56.406579   26276 start.go:293] postStartSetup for "functional-613120" (driver="kvm2")
	I1013 21:29:56.406587   26276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:29:56.406600   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.406931   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:29:56.406992   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.409790   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.410210   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.410233   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.410372   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.410581   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.410732   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.410909   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:56.500379   26276 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:29:56.505569   26276 info.go:137] Remote host: Buildroot 2025.02
	I1013 21:29:56.505594   26276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 21:29:56.505660   26276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 21:29:56.505766   26276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 21:29:56.505830   26276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/test/nested/copy/19947/hosts -> hosts in /etc/test/nested/copy/19947
	I1013 21:29:56.505860   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/19947
	I1013 21:29:56.518395   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 21:29:56.554366   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/test/nested/copy/19947/hosts --> /etc/test/nested/copy/19947/hosts (40 bytes)
	I1013 21:29:56.590660   26276 start.go:296] duration metric: took 184.067966ms for postStartSetup
	I1013 21:29:56.590689   26276 fix.go:56] duration metric: took 7.056542663s for fixHost
	I1013 21:29:56.590706   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.593883   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.594280   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.594319   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.594495   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.594700   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.594867   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.595050   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.595241   26276 main.go:141] libmachine: Using SSH client type: native
	I1013 21:29:56.595431   26276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.113 22 <nil> <nil>}
	I1013 21:29:56.595435   26276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 21:29:56.716086   26276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760390996.709359886
	
	I1013 21:29:56.716097   26276 fix.go:216] guest clock: 1760390996.709359886
	I1013 21:29:56.716104   26276 fix.go:229] Guest: 2025-10-13 21:29:56.709359886 +0000 UTC Remote: 2025-10-13 21:29:56.590691256 +0000 UTC m=+7.190574892 (delta=118.66863ms)
	I1013 21:29:56.716122   26276 fix.go:200] guest clock delta is within tolerance: 118.66863ms
	I1013 21:29:56.716126   26276 start.go:83] releasing machines lock for "functional-613120", held for 7.181988331s
	I1013 21:29:56.716141   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.716396   26276 main.go:141] libmachine: (functional-613120) Calling .GetIP
	I1013 21:29:56.719177   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.719606   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.719630   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.719802   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.720346   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.720498   26276 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:29:56.720600   26276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:29:56.720630   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.720682   26276 ssh_runner.go:195] Run: cat /version.json
	I1013 21:29:56.720696   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
	I1013 21:29:56.723704   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.723737   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.724063   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.724089   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:29:56.724110   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.724231   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:29:56.724316   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.724506   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.724603   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
	I1013 21:29:56.724669   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.724737   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
	I1013 21:29:56.724796   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:56.724843   26276 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
	I1013 21:29:56.724963   26276 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
	I1013 21:29:56.870143   26276 ssh_runner.go:195] Run: systemctl --version
	I1013 21:29:56.923214   26276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:29:57.180062   26276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:29:57.199701   26276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:29:57.199766   26276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:29:57.224345   26276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 21:29:57.224360   26276 start.go:495] detecting cgroup driver to use...
	I1013 21:29:57.224421   26276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:29:57.265666   26276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:29:57.310498   26276 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:29:57.310560   26276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:29:57.350114   26276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:29:57.380389   26276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:29:57.644994   26276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:29:57.837112   26276 docker.go:234] disabling docker service ...
	I1013 21:29:57.837182   26276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:29:57.866375   26276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:29:57.892095   26276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:29:58.129135   26276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:29:58.311135   26276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:29:58.328458   26276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:29:58.354029   26276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:29:58.354086   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.367309   26276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 21:29:58.367358   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.380815   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.395036   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.408251   26276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:29:58.423198   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.436621   26276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.450843   26276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:29:58.463563   26276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:29:58.474692   26276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:29:58.487040   26276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:29:58.665117   26276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:31:29.023052   26276 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.357908146s)
	I1013 21:31:29.023102   26276 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:31:29.023180   26276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:31:29.029455   26276 start.go:563] Will wait 60s for crictl version
	I1013 21:31:29.029504   26276 ssh_runner.go:195] Run: which crictl
	I1013 21:31:29.034464   26276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 21:31:29.076608   26276 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 21:31:29.076664   26276 ssh_runner.go:195] Run: crio --version
	I1013 21:31:29.110153   26276 ssh_runner.go:195] Run: crio --version
	I1013 21:31:29.143471   26276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 21:31:29.145044   26276 main.go:141] libmachine: (functional-613120) Calling .GetIP
	I1013 21:31:29.148262   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:31:29.148722   26276 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
	I1013 21:31:29.148745   26276 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
	I1013 21:31:29.149032   26276 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 21:31:29.155580   26276 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1013 21:31:29.157089   26276 kubeadm.go:883] updating cluster {Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:31:29.157228   26276 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:31:29.157293   26276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:31:29.208616   26276 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:31:29.208626   26276 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:31:29.208705   26276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:31:29.250833   26276 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:31:29.250844   26276 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:31:29.250850   26276 kubeadm.go:934] updating node { 192.168.39.113 8441 v1.34.1 crio true true} ...
	I1013 21:31:29.250931   26276 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-613120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:31:29.251006   26276 ssh_runner.go:195] Run: crio config
	I1013 21:31:29.301886   26276 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1013 21:31:29.301911   26276 cni.go:84] Creating CNI manager for ""
	I1013 21:31:29.301923   26276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:31:29.301935   26276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:31:29.301956   26276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.113 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-613120 NodeName:functional-613120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigO
pts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:31:29.302099   26276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.113
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-613120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.113"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.113"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:31:29.302170   26276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:31:29.315616   26276 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:31:29.315684   26276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:31:29.328457   26276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1013 21:31:29.349953   26276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:31:29.371819   26276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1013 21:31:29.393809   26276 ssh_runner.go:195] Run: grep 192.168.39.113	control-plane.minikube.internal$ /etc/hosts
	I1013 21:31:29.398778   26276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:31:29.570473   26276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:31:29.588926   26276 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120 for IP: 192.168.39.113
	I1013 21:31:29.588937   26276 certs.go:195] generating shared ca certs ...
	I1013 21:31:29.588951   26276 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:31:29.589089   26276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 21:31:29.589120   26276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 21:31:29.589125   26276 certs.go:257] generating profile certs ...
	I1013 21:31:29.589216   26276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.key
	I1013 21:31:29.589261   26276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/apiserver.key.b3c6289d
	I1013 21:31:29.589292   26276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/proxy-client.key
	I1013 21:31:29.589397   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 21:31:29.589420   26276 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 21:31:29.589425   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 21:31:29.589444   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 21:31:29.589461   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:31:29.589477   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 21:31:29.589510   26276 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 21:31:29.590055   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:31:29.623552   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 21:31:29.654637   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:31:29.685504   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 21:31:29.716743   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 21:31:29.747883   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:31:29.779761   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:31:29.811055   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:31:29.842083   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 21:31:29.873747   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 21:31:29.906062   26276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:31:29.937606   26276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:31:29.959141   26276 ssh_runner.go:195] Run: openssl version
	I1013 21:31:29.966216   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 21:31:29.980104   26276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 21:31:29.985859   26276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 21:31:29.985911   26276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 21:31:29.994513   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 21:31:30.007015   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 21:31:30.022278   26276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 21:31:30.028218   26276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 21:31:30.028283   26276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 21:31:30.036418   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:31:30.049741   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:31:30.063926   26276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:31:30.070173   26276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:31:30.070233   26276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:31:30.078206   26276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:31:30.091383   26276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:31:30.097602   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 21:31:30.105375   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 21:31:30.113500   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 21:31:30.121703   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 21:31:30.129715   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 21:31:30.137334   26276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 21:31:30.144688   26276 kubeadm.go:400] StartCluster: {Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:31:30.144760   26276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:31:30.144830   26276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:31:30.188942   26276 cri.go:89] found id: "cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15"
	I1013 21:31:30.188952   26276 cri.go:89] found id: "5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a"
	I1013 21:31:30.188955   26276 cri.go:89] found id: "bcc1c82c55ee710c48ef5bdf25e3c630ae95ea6aefdaca9408754829efc78844"
	I1013 21:31:30.188958   26276 cri.go:89] found id: "203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac"
	I1013 21:31:30.188960   26276 cri.go:89] found id: "b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87"
	I1013 21:31:30.188962   26276 cri.go:89] found id: "26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976"
	I1013 21:31:30.188963   26276 cri.go:89] found id: "6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5"
	I1013 21:31:30.188964   26276 cri.go:89] found id: ""
	I1013 21:31:30.189013   26276 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
helpers_test.go:269: (dbg) Run:  kubectl --context functional-613120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.01s)

TestFunctional/parallel/DashboardCmd (302.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-613120 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-613120 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-613120 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-613120 --alsologtostderr -v=1] stderr:
I1013 21:35:51.648869   28223 out.go:360] Setting OutFile to fd 1 ...
I1013 21:35:51.649169   28223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:35:51.649181   28223 out.go:374] Setting ErrFile to fd 2...
I1013 21:35:51.649187   28223 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:35:51.649399   28223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
I1013 21:35:51.649649   28223 mustload.go:65] Loading cluster: functional-613120
I1013 21:35:51.649973   28223 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:35:51.650364   28223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:35:51.650427   28223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:35:51.664107   28223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
I1013 21:35:51.664521   28223 main.go:141] libmachine: () Calling .GetVersion
I1013 21:35:51.664985   28223 main.go:141] libmachine: Using API Version  1
I1013 21:35:51.665011   28223 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:35:51.665509   28223 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:35:51.665701   28223 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:35:51.667514   28223 host.go:66] Checking if "functional-613120" exists ...
I1013 21:35:51.667918   28223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:35:51.667964   28223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:35:51.681468   28223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
I1013 21:35:51.681842   28223 main.go:141] libmachine: () Calling .GetVersion
I1013 21:35:51.682280   28223 main.go:141] libmachine: Using API Version  1
I1013 21:35:51.682300   28223 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:35:51.682643   28223 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:35:51.682867   28223 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:35:51.683045   28223 api_server.go:166] Checking apiserver status ...
I1013 21:35:51.683105   28223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1013 21:35:51.683140   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:35:51.686344   28223 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:35:51.686803   28223 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:35:51.686856   28223 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:35:51.686944   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:35:51.687122   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:35:51.687284   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:35:51.687466   28223 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:35:51.810962   28223 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6126/cgroup
W1013 21:35:51.832243   28223 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6126/cgroup: Process exited with status 1
stdout:

stderr:
I1013 21:35:51.832302   28223 ssh_runner.go:195] Run: ls
I1013 21:35:51.840038   28223 api_server.go:253] Checking apiserver healthz at https://192.168.39.113:8441/healthz ...
I1013 21:35:51.847421   28223 api_server.go:279] https://192.168.39.113:8441/healthz returned 200:
ok
W1013 21:35:51.847465   28223 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1013 21:35:51.847645   28223 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:35:51.847655   28223 addons.go:69] Setting dashboard=true in profile "functional-613120"
I1013 21:35:51.847664   28223 addons.go:238] Setting addon dashboard=true in "functional-613120"
I1013 21:35:51.847697   28223 host.go:66] Checking if "functional-613120" exists ...
I1013 21:35:51.848003   28223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:35:51.848051   28223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:35:51.862254   28223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
I1013 21:35:51.862727   28223 main.go:141] libmachine: () Calling .GetVersion
I1013 21:35:51.863150   28223 main.go:141] libmachine: Using API Version  1
I1013 21:35:51.863203   28223 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:35:51.863639   28223 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:35:51.864278   28223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:35:51.864326   28223 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:35:51.878065   28223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33923
I1013 21:35:51.878510   28223 main.go:141] libmachine: () Calling .GetVersion
I1013 21:35:51.878995   28223 main.go:141] libmachine: Using API Version  1
I1013 21:35:51.879013   28223 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:35:51.879551   28223 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:35:51.879811   28223 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:35:51.882065   28223 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:35:51.884654   28223 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1013 21:35:51.886338   28223 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1013 21:35:51.887692   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1013 21:35:51.887712   28223 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1013 21:35:51.887731   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:35:51.890877   28223 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:35:51.891406   28223 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:35:51.891450   28223 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:35:51.891589   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:35:51.891774   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:35:51.891992   28223 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:35:51.892129   28223 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:35:52.004757   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1013 21:35:52.004780   28223 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1013 21:35:52.032823   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1013 21:35:52.032846   28223 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1013 21:35:52.065108   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1013 21:35:52.065130   28223 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1013 21:35:52.098551   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1013 21:35:52.098572   28223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1013 21:35:52.125322   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1013 21:35:52.125350   28223 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1013 21:35:52.151481   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1013 21:35:52.151504   28223 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1013 21:35:52.179139   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1013 21:35:52.179178   28223 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1013 21:35:52.203321   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1013 21:35:52.203342   28223 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1013 21:35:52.226015   28223 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1013 21:35:52.226040   28223 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1013 21:35:52.249176   28223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1013 21:35:52.922360   28223 main.go:141] libmachine: Making call to close driver server
I1013 21:35:52.922391   28223 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:35:52.922721   28223 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:35:52.922738   28223 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:35:52.922747   28223 main.go:141] libmachine: Making call to close driver server
I1013 21:35:52.922753   28223 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:35:52.922970   28223 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:35:52.922988   28223 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:35:52.923009   28223 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:35:52.924859   28223 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-613120 addons enable metrics-server

I1013 21:35:52.926055   28223 addons.go:201] Writing out "functional-613120" config to set dashboard=true...
W1013 21:35:52.926295   28223 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1013 21:35:52.926934   28223 kapi.go:59] client config for functional-613120: &rest.Config{Host:"https://192.168.39.113:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.key", CAFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1013 21:35:52.927392   28223 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1013 21:35:52.927417   28223 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1013 21:35:52.927427   28223 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1013 21:35:52.927434   28223 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1013 21:35:52.927438   28223 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1013 21:35:52.937916   28223 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  cbdc722e-953f-45ff-bf59-60d6731a9144 842 0 2025-10-13 21:35:52 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-13 21:35:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.45.50,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.45.50],IPFamilies:[IPv4],AllocateLoadBalancerNod
ePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1013 21:35:52.938064   28223 out.go:285] * Launching proxy ...
* Launching proxy ...
I1013 21:35:52.938130   28223 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-613120 proxy --port 36195]
I1013 21:35:52.938464   28223 dashboard.go:157] Waiting for kubectl to output host:port ...
I1013 21:35:52.983296   28223 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1013 21:35:52.983332   28223 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1013 21:35:52.994399   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b972f3f7-bf6b-460f-b5d5-db676a457e51] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:52 GMT]] Body:0xc0007f24c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001caf00 TLS:<nil>}
I1013 21:35:52.994476   28223 retry.go:31] will retry after 116.577µs: Temporary Error: unexpected response code: 404
I1013 21:35:52.998469   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a34b6370-2bee-42a9-b97d-4f103468f504] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:52 GMT]] Body:0xc0003893c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8dc0 TLS:<nil>}
I1013 21:35:52.998532   28223 retry.go:31] will retry after 114.48µs: Temporary Error: unexpected response code: 404
I1013 21:35:53.002380   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b64b24a-5984-490f-a987-da1f3a07e65d] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:52 GMT]] Body:0xc0007f2600 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb040 TLS:<nil>}
I1013 21:35:53.002437   28223 retry.go:31] will retry after 185.896µs: Temporary Error: unexpected response code: 404
I1013 21:35:53.006376   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd13b7c3-c272-4967-8fde-5c8af1e0c985] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:52 GMT]] Body:0xc000bfa140 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e8f00 TLS:<nil>}
I1013 21:35:53.006433   28223 retry.go:31] will retry after 241.565µs: Temporary Error: unexpected response code: 404
I1013 21:35:53.010895   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f5dc9c15-c030-4937-a241-a10967ae451e] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:52 GMT]] Body:0xc0003894c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bc640 TLS:<nil>}
I1013 21:35:53.010951   28223 retry.go:31] will retry after 297.759µs: Temporary Error: unexpected response code: 404
I1013 21:35:53.015357   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7350eef5-0fa1-4766-a5a4-627f497ca12f] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:52 GMT]] Body:0xc0007f2700 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb180 TLS:<nil>}
I1013 21:35:53.015402   28223 retry.go:31] will retry after 1.07858ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.018711   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81d7875e-4636-45a6-bdc3-e39dccac19d1] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0003895c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9040 TLS:<nil>}
I1013 21:35:53.018781   28223 retry.go:31] will retry after 1.118246ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.024472   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53a171a8-7009-4ea3-a991-874a59f67408] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0007f2800 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb2c0 TLS:<nil>}
I1013 21:35:53.024522   28223 retry.go:31] will retry after 1.847725ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.030337   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59f60cd0-e441-4b42-b226-1d15422119b8] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc000bfa280 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e9180 TLS:<nil>}
I1013 21:35:53.030388   28223 retry.go:31] will retry after 1.908355ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.035134   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e0450fc-7fa4-41ce-be11-fed758f6b582] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0007f2900 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bc780 TLS:<nil>}
I1013 21:35:53.035205   28223 retry.go:31] will retry after 1.930144ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.040979   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae51e3f5-c309-4e8e-886d-e73cc2065731] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc000bfa380 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002e92c0 TLS:<nil>}
I1013 21:35:53.041029   28223 retry.go:31] will retry after 7.299904ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.052299   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b284bcb7-deaf-4897-a914-0bc6804c8386] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0003896c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bc8c0 TLS:<nil>}
I1013 21:35:53.052340   28223 retry.go:31] will retry after 10.117438ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.067374   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8f69684-6acf-4298-88dd-96b9625b6f2d] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc000bfa440 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb400 TLS:<nil>}
I1013 21:35:53.067421   28223 retry.go:31] will retry after 17.215747ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.089005   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e9956945-557f-4aa1-84b9-4d2a64bc0939] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0003897c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bca00 TLS:<nil>}
I1013 21:35:53.089071   28223 retry.go:31] will retry after 21.973187ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.114764   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e315cd08-f0e4-4258-af4c-dfa444f5b34a] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc000bfa540 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb540 TLS:<nil>}
I1013 21:35:53.114813   28223 retry.go:31] will retry after 30.653253ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.149558   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9f47833-55cf-400c-8c5a-ca33b9754a53] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0003898c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bcb40 TLS:<nil>}
I1013 21:35:53.149611   28223 retry.go:31] will retry after 49.892351ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.203242   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf86a903-b268-4b91-90f1-8a79d0b91d5c] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0007f2a40 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb680 TLS:<nil>}
I1013 21:35:53.203295   28223 retry.go:31] will retry after 77.085685ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.283442   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5233d710-6332-44a7-8fed-f2da2f957c67] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc000bfa640 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000419cc0 TLS:<nil>}
I1013 21:35:53.283492   28223 retry.go:31] will retry after 106.477513ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.394512   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[18ae9d0a-1961-4c3f-97f9-8aa419d75441] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0003899c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bd040 TLS:<nil>}
I1013 21:35:53.394579   28223 retry.go:31] will retry after 174.952724ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.573214   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[962c489e-625d-4d1f-8992-6134d5410380] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc000bfa740 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb7c0 TLS:<nil>}
I1013 21:35:53.573279   28223 retry.go:31] will retry after 232.400086ms: Temporary Error: unexpected response code: 404
I1013 21:35:53.810772   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0a51121-f51e-42e5-8a26-b249024f6ef1] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:53 GMT]] Body:0xc0007f2e40 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bd400 TLS:<nil>}
I1013 21:35:53.810851   28223 retry.go:31] will retry after 231.528658ms: Temporary Error: unexpected response code: 404
I1013 21:35:54.046007   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec53150d-1203-4a14-bfac-fb34ebdbae9f] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:54 GMT]] Body:0xc000389a80 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000419e00 TLS:<nil>}
I1013 21:35:54.046073   28223 retry.go:31] will retry after 721.195294ms: Temporary Error: unexpected response code: 404
I1013 21:35:54.771547   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac30bec2-adb8-422b-bbc3-6c40b6538fe6] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:54 GMT]] Body:0xc00085e140 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb900 TLS:<nil>}
I1013 21:35:54.771612   28223 retry.go:31] will retry after 743.429922ms: Temporary Error: unexpected response code: 404
I1013 21:35:55.518241   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fed1bc6e-6bcb-413b-9347-7c36665be9ed] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:55 GMT]] Body:0xc000bfa840 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001856000 TLS:<nil>}
I1013 21:35:55.518289   28223 retry.go:31] will retry after 1.185575048s: Temporary Error: unexpected response code: 404
I1013 21:35:56.708836   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f267753-3293-470e-9877-37fa4abee140] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:56 GMT]] Body:0xc000389bc0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001856140 TLS:<nil>}
I1013 21:35:56.708892   28223 retry.go:31] will retry after 2.43976428s: Temporary Error: unexpected response code: 404
I1013 21:35:59.153639   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d6cc76c-16b1-42e9-ab65-06311e5c5471] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:35:59 GMT]] Body:0xc000389cc0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cbb80 TLS:<nil>}
I1013 21:35:59.153695   28223 retry.go:31] will retry after 3.5790948s: Temporary Error: unexpected response code: 404
I1013 21:36:02.737267   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61a20ff4-189d-4ff6-bc34-03fd9947824a] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:36:02 GMT]] Body:0xc000bfa8c0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001856280 TLS:<nil>}
I1013 21:36:02.737331   28223 retry.go:31] will retry after 5.443011302s: Temporary Error: unexpected response code: 404
I1013 21:36:08.185367   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c43aff8b-a04b-4ef6-9108-fd1ada1f19ea] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:36:08 GMT]] Body:0xc00085e600 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bd540 TLS:<nil>}
I1013 21:36:08.185437   28223 retry.go:31] will retry after 5.285307256s: Temporary Error: unexpected response code: 404
I1013 21:36:13.474100   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5b9ebbbb-1dee-4e0c-be42-e709037f6205] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:36:13 GMT]] Body:0xc000bfa980 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a2000 TLS:<nil>}
I1013 21:36:13.474197   28223 retry.go:31] will retry after 10.554231526s: Temporary Error: unexpected response code: 404
I1013 21:36:24.032412   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17c3c87a-50a8-404d-a176-8a4ffa35cd6f] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:36:24 GMT]] Body:0xc000389e00 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018563c0 TLS:<nil>}
I1013 21:36:24.032479   28223 retry.go:31] will retry after 9.773833669s: Temporary Error: unexpected response code: 404
I1013 21:36:33.809700   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8ddf306c-f4ab-496a-b924-702f99c2d5dd] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:36:33 GMT]] Body:0xc00085e740 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bd900 TLS:<nil>}
I1013 21:36:33.809795   28223 retry.go:31] will retry after 18.787233011s: Temporary Error: unexpected response code: 404
I1013 21:36:52.600912   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[52c22ca2-2093-4e5b-b5dc-18ba1bb91141] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:36:52 GMT]] Body:0xc000389ec0 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004bda40 TLS:<nil>}
I1013 21:36:52.600969   28223 retry.go:31] will retry after 40.169309193s: Temporary Error: unexpected response code: 404
I1013 21:37:32.774025   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[235808d1-ddf7-4233-b1a8-98ebe05dadb3] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:37:32 GMT]] Body:0xc000389f40 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001856500 TLS:<nil>}
I1013 21:37:32.774080   28223 retry.go:31] will retry after 33.18466873s: Temporary Error: unexpected response code: 404
I1013 21:38:05.964674   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[21613019-fcc4-4f90-9ab9-10b392e59e94] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:38:05 GMT]] Body:0xc000388200 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001856640 TLS:<nil>}
I1013 21:38:05.964757   28223 retry.go:31] will retry after 1m28.943262203s: Temporary Error: unexpected response code: 404
I1013 21:39:34.913454   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2cd6f4e-e62a-4d18-b12e-979192e629d2] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:39:34 GMT]] Body:0xc00085e180 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0018a2140 TLS:<nil>}
I1013 21:39:34.913527   28223 retry.go:31] will retry after 54.563028752s: Temporary Error: unexpected response code: 404
I1013 21:40:29.482608   28223 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:404 Not Found StatusCode:404 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89f43b48-9b73-42c9-85e3-fc2dc3648b2d] Cache-Control:[no-cache, private] Content-Length:[216] Content-Type:[application/json] Date:[Mon, 13 Oct 2025 21:40:29 GMT]] Body:0xc000388240 ContentLength:216 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001856780 TLS:<nil>}
I1013 21:40:29.482692   28223 retry.go:31] will retry after 37.189397883s: Temporary Error: unexpected response code: 404
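The repeated 404s above are minikube's dashboard readiness poll: each attempt re-requests the kubernetes-dashboard service proxy URL through the local apiserver proxy, and retry.go sleeps a progressively longer, jittered interval before trying again until the service answers or the test's timeout expires. The sketch below is a minimal illustration of that poll-with-growing-backoff pattern, assuming an arbitrary 2x growth factor and a 1-minute cap; it is not minikube's actual retry package, just a standalone stand-in.

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// pollUntilOK keeps GETting url until it returns 200 OK or the deadline
// passes, sleeping a growing, jittered interval between attempts.
// The 2x growth factor and the 1-minute cap are illustrative assumptions.
func pollUntilOK(url string, deadline time.Duration) error {
	backoff := time.Millisecond
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the proxied service finally answered
			}
			fmt.Printf("will retry after %v: unexpected response code: %d\n", backoff, resp.StatusCode)
		} else {
			fmt.Printf("will retry after %v: %v\n", backoff, err)
		}
		// sleep, then grow the interval and add a little jitter
		time.Sleep(backoff)
		backoff = backoff*2 + time.Duration(rand.Int63n(int64(backoff)))
		if backoff > time.Minute {
			backoff = time.Minute
		}
	}
	return fmt.Errorf("%s never returned 200 within %v", url, deadline)
}

func main() {
	// proxy URL taken from the log above; the 5-minute budget is made up
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := pollUntilOK(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Run against a service that keeps returning 404, a loop like this emits "will retry after …: unexpected response code: 404" lines much like the ones recorded above until the deadline is reached.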
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-613120 -n functional-613120
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs -n 25: (1.411265271s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-613120 ssh findmnt -T /mount2                                                                                                                     │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │ 13 Oct 25 21:39 UTC │
	│ ssh     │ functional-613120 ssh findmnt -T /mount3                                                                                                                     │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │ 13 Oct 25 21:39 UTC │
	│ mount   │ -p functional-613120 --kill=true                                                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │ 13 Oct 25 21:39 UTC │
	│ ssh     │ functional-613120 ssh sudo systemctl is-active docker                                                                                                        │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │                     │
	│ ssh     │ functional-613120 ssh sudo systemctl is-active containerd                                                                                                    │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │                     │
	│ image   │ functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │ 13 Oct 25 21:39 UTC │
	│ image   │ functional-613120 image ls                                                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:39 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image ls                                                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image ls                                                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image save kicbase/echo-server:functional-613120 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image rm kicbase/echo-server:functional-613120 --alsologtostderr                                                                           │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image ls                                                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image ls                                                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image   │ functional-613120 image save --daemon kicbase/echo-server:functional-613120 --alsologtostderr                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /etc/test/nested/copy/19947/hosts                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /etc/ssl/certs/19947.pem                                                                                                      │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /usr/share/ca-certificates/19947.pem                                                                                          │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /etc/ssl/certs/199472.pem                                                                                                     │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /usr/share/ca-certificates/199472.pem                                                                                         │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh     │ functional-613120 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:35:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:35:51.510606   28143 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:35:51.510911   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.510922   28143 out.go:374] Setting ErrFile to fd 2...
	I1013 21:35:51.510927   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.511106   28143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:35:51.511546   28143 out.go:368] Setting JSON to false
	I1013 21:35:51.512490   28143 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4699,"bootTime":1760386652,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:35:51.512576   28143 start.go:141] virtualization: kvm guest
	I1013 21:35:51.514374   28143 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:35:51.515688   28143 notify.go:220] Checking for updates...
	I1013 21:35:51.515726   28143 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:35:51.517033   28143 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:35:51.518361   28143 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:35:51.520113   28143 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:35:51.521417   28143 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:35:51.522605   28143 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:35:51.524328   28143 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:35:51.524884   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.524955   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.540121   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I1013 21:35:51.540773   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.541425   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.541445   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.541933   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.542144   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.542422   28143 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:35:51.542767   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.542806   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.559961   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45987
	I1013 21:35:51.560583   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.561142   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.561179   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.561531   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.561724   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.597447   28143 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:35:51.598791   28143 start.go:305] selected driver: kvm2
	I1013 21:35:51.598811   28143 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.598947   28143 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:35:51.600183   28143 cni.go:84] Creating CNI manager for ""
	I1013 21:35:51.600254   28143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:35:51.600313   28143 start.go:349] cluster config:
	{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.602828   28143 out.go:179] * dry-run validation complete!
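The start.go:925 and start.go:349 lines above dump the existing profile's full cluster config, which this `minikube start` invocation re-validates the kvm2 driver against before the CRI-O excerpt that follows. As a rough illustration of the shape of that data, here is a heavily trimmed Go model populated with the values visible in the dump; the type names and the reduced field set are assumptions for readability, not minikube's actual config structs.

package main

import "fmt"

// Hypothetical, heavily trimmed model of the profile dump printed at
// start.go:349. Field names mirror the log, but the real minikube
// config type carries many more fields.
type kubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string // "crio" in this run
	NetworkPlugin     string
	ServiceCIDR       string
}

type node struct {
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type clusterConfig struct {
	Name             string
	Driver           string // "kvm2"
	Memory           int    // MiB
	CPUs             int
	DiskSize         int // MB
	KubernetesConfig kubernetesConfig
	Nodes            []node
	Addons           map[string]bool
}

func main() {
	// values copied from the cluster config dump above
	cfg := clusterConfig{
		Name:     "functional-613120",
		Driver:   "kvm2",
		Memory:   4096,
		CPUs:     2,
		DiskSize: 20000,
		KubernetesConfig: kubernetesConfig{
			KubernetesVersion: "v1.34.1",
			ClusterName:       "functional-613120",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
		Nodes: []node{{IP: "192.168.39.113", Port: 8441, KubernetesVersion: "v1.34.1", ControlPlane: true, Worker: true}},
		Addons: map[string]bool{"default-storageclass": true, "storage-provisioner": true},
	}
	fmt.Printf("%+v\n", cfg)
}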
	
	
	==> CRI-O <==
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.501162802Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6vfs9,Uid:c6d88d85-406b-4868-99c7-8ab32c2b31f6,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391097959965301,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:31:37.437890061Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcjdv,Uid:89be2539-f688-4d2f-b897-965ff79df2fb,Namespace:kube-system,At
tempt:3,},State:SANDBOX_READY,CreatedAt:1760391097797084414,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:31:37.437886664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-613120,Uid:5ffd65e64de9ca81f1b2e2c2257415c6,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391092968024701,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,tier: control-plane,},Annotat
ions:map[string]string{kubernetes.io/config.hash: 5ffd65e64de9ca81f1b2e2c2257415c6,kubernetes.io/config.seen: 2025-10-13T21:31:32.441134376Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-613120,Uid:522f7dd28b9425c450ca359799bcd7d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760391092952910106,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.113:8441,kubernetes.io/config.hash: 522f7dd28b9425c450ca359799bcd7d7,kubernetes.io/config.seen: 2025-10-13T21:31:32.441132398Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&PodSandboxMetadata{Name:etcd-functional-613120,Uid:f324dff40f2879312cdb44fbeb3c82c4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391092934953549,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.113:2379,kubernetes.io/config.hash: f324dff40f2879312cdb44fbeb3c82c4,kubernetes.io/config.seen: 2025-10-13T21:31:32.441128741Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b81584cf-83d2-47fc-a6af-4edb4e8501aa name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.502217746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9882bbd-348c-48f5-ad78-86ff905da013 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.502272244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9882bbd-348c-48f5-ad78-86ff905da013 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.502454623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379
ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9882bbd-348c-48f5-ad78-86ff905da013 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.513236283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6199590c-4739-447b-b3a6-3471168d97e4 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.513432811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6199590c-4739-447b-b3a6-3471168d97e4 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.514681827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78a390b5-c99d-4931-a412-9131a38cc78c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.515252998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391652515233031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78a390b5-c99d-4931-a412-9131a38cc78c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.515796892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dbdc858-f7f4-4316-a98c-9c67b0ca6934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.515847056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dbdc858-f7f4-4316-a98c-9c67b0ca6934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.516132753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dbdc858-f7f4-4316-a98c-9c67b0ca6934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.556124942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e823262b-0165-47d4-b522-0cd3bc922d05 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.556214610Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e823262b-0165-47d4-b522-0cd3bc922d05 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.557309996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86dd9816-f007-4622-a30f-98f97d2cc22d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.558551383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391652558527067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86dd9816-f007-4622-a30f-98f97d2cc22d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.559297616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b0b0ed5-570e-4bef-bbbb-e5c84571bfd3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.559650193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b0b0ed5-570e-4bef-bbbb-e5c84571bfd3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.560000668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b0b0ed5-570e-4bef-bbbb-e5c84571bfd3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.597536871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7865f83-6369-49a6-a832-ada4507f965a name=/runtime.v1.RuntimeService/Version
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.597628576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7865f83-6369-49a6-a832-ada4507f965a name=/runtime.v1.RuntimeService/Version
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.599107455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e587c64d-8220-4782-adfc-f6cba7dfe17b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.599721637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391652599659893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175579,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e587c64d-8220-4782-adfc-f6cba7dfe17b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.600509947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a3098b0-bb6b-40d0-a2f5-436e739e3881 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.600586197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a3098b0-bb6b-40d0-a2f5-436e739e3881 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:40:52 functional-613120 crio[5570]: time="2025-10-13 21:40:52.600808116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a3098b0-bb6b-40d0-a2f5-436e739e3881 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	79e6030984a0e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   About a minute ago   Exited              mount-munger              0                   a93df8828ff74       busybox-mount
	e07550cbe1c90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago        Running             coredns                   2                   713d35e968dc4       coredns-66bc5c9577-6vfs9
	f81ad801f39c2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      9 minutes ago        Running             kube-proxy                2                   8057a65067477       kube-proxy-kcjdv
	c8df3188b3da9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      9 minutes ago        Running             kube-scheduler            2                   e8679e9415572       kube-scheduler-functional-613120
	66242d56b1908       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      9 minutes ago        Running             kube-apiserver            0                   e1ecf0fe1c6ec       kube-apiserver-functional-613120
	ea72e205f15e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago        Running             etcd                      2                   5de8ccaa96a09       etcd-functional-613120
	cccca560ba245       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago       Exited              coredns                   1                   823c5e67c612e       coredns-66bc5c9577-6vfs9
	5acb4b6ac1a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago       Exited              storage-provisioner       1                   1a5f90e559f5a       storage-provisioner
	203bfd3c79457       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago       Exited              kube-scheduler            1                   33858ed7ddb02       kube-scheduler-functional-613120
	b44a46bca9e30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago       Exited              kube-proxy                1                   c8f88dad0e67d       kube-proxy-kcjdv
	26fd846e4671c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago       Exited              etcd                      1                   414c394d4f508       etcd-functional-613120
	6ee4b6616f5b7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago       Exited              kube-controller-manager   1                   08cbfc93ded1a       kube-controller-manager-functional-613120
	
	
	==> coredns [cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42456 - 27503 "HINFO IN 7724600574698421821.8526601767399151930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049782112s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55800 - 5148 "HINFO IN 1677522805808455981.5721645023815659663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036495987s
	
	
	==> describe nodes <==
	Name:               functional-613120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-613120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=functional-613120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_28_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:28:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-613120
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:40:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:40:06 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:40:06 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:40:06 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:40:06 +0000   Mon, 13 Oct 2025 21:28:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-613120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a6c3b9eff6414082874fcb18b5974c
	  System UUID:                16a6c3b9-eff6-4140-8287-4fcb18b5974c
	  Boot ID:                    08d1688d-f04d-4990-8e88-f64344bac422
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6vfs9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-613120                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-613120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-functional-613120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kcjdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-613120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 9m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeReady                12m                    kubelet          Node functional-613120 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                    node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     9m20s (x7 over 9m20s)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[Oct13 21:27] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008604] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct13 21:28] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088305] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096018] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.147837] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.872269] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.344503] kauditd_printk_skb: 243 callbacks suppressed
	[Oct13 21:29] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.696503] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.771558] kauditd_printk_skb: 290 callbacks suppressed
	[ +14.216137] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.213321] kauditd_printk_skb: 12 callbacks suppressed
	[Oct13 21:31] kauditd_printk_skb: 209 callbacks suppressed
	[  +5.599027] kauditd_printk_skb: 153 callbacks suppressed
	[Oct13 21:32] kauditd_printk_skb: 98 callbacks suppressed
	[Oct13 21:35] kauditd_printk_skb: 16 callbacks suppressed
	[  +3.169795] kauditd_printk_skb: 63 callbacks suppressed
	[Oct13 21:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.987661] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976] <==
	{"level":"warn","ts":"2025-10-13T21:29:26.617665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.623351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.631487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.641884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.655241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.658347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.717551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:29:50.857092Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:29:50.857237Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"error","ts":"2025-10-13T21:29:50.857324Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.938357Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-10-13T21:29:50.938508Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T21:29:50.938555Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938568Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938627Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938633Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938667Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938674Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"error","ts":"2025-10-13T21:29:50.942099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942123Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-10-13T21:29:50.942129Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> etcd [ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80] <==
	{"level":"warn","ts":"2025-10-13T21:31:34.930526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.940219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.949732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.957914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.964462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.970099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.979868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.985236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.996943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.011573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.017606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.025865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.035148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.043812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.052613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.062810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.076994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.083661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.091500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.098948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.109285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.119002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.129117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.135664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.209541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:40:53 up 13 min,  0 users,  load average: 0.31, 0.47, 0.44
	Linux functional-613120 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25] <==
	I1013 21:31:35.989738       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 21:31:35.989941       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 21:31:35.994336       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 21:31:35.996139       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 21:31:35.996251       1 aggregator.go:171] initial CRD sync complete...
	I1013 21:31:35.996323       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 21:31:35.996330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:31:35.996335       1 cache.go:39] Caches are synced for autoregister controller
	I1013 21:31:35.998223       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 21:31:36.001759       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:31:36.015526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 21:31:36.016269       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:31:36.790607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:31:37.524043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:31:37.700261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:31:37.777593       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:31:37.846439       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:31:37.863830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:35:46.309636       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.121.66"}
	I1013 21:35:50.986537       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.180.30"}
	I1013 21:35:52.706618       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:35:52.877867       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.45.50"}
	I1013 21:35:52.901108       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.97.166"}
	I1013 21:35:53.478919       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.177.172"}
	I1013 21:40:06.127459       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.19.102"}
	
	
	==> kube-controller-manager [6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5] <==
	I1013 21:29:30.142439       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:29:30.142680       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:29:30.142877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:29:30.142688       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:29:30.142705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:29:30.142939       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-613120"
	I1013 21:29:30.143010       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:29:30.142698       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:29:30.142711       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 21:29:30.143817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:29:30.144966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:29:30.146454       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:29:30.147790       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:29:30.154058       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:29:30.160261       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:29:30.163983       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:29:30.167909       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:29:30.172471       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:29:30.177559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.178764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:29:30.204417       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.204466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:29:30.204473       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:29:30.208646       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:29:30.209816       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87] <==
	I1013 21:29:28.884906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:29:28.988240       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:29:28.988284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:29:28.988364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:29:29.063938       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:29:29.064138       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:29:29.064231       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:29:29.075846       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:29:29.077723       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:29:29.077739       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:29.084538       1 config.go:200] "Starting service config controller"
	I1013 21:29:29.084550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:29:29.084569       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:29:29.084574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:29:29.084584       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:29:29.084587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:29:29.085167       1 config.go:309] "Starting node config controller"
	I1013 21:29:29.085173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:29:29.085178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:29:29.185174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:29:29.185311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:29:29.185334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9] <==
	I1013 21:31:38.397204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:31:38.498253       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:31:38.498309       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:31:38.498450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:31:38.666515       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:31:38.666590       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:31:38.666620       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:31:38.681690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:31:38.682623       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:31:38.682729       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:38.697128       1 config.go:309] "Starting node config controller"
	I1013 21:31:38.697163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:31:38.697169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:31:38.697425       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:31:38.697433       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:31:38.697497       1 config.go:200] "Starting service config controller"
	I1013 21:31:38.697501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:31:38.697512       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:31:38.697515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:31:38.798520       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:31:38.798568       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:31:38.799526       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac] <==
	I1013 21:29:25.656553       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:29:27.329435       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:29:27.329481       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:29:27.329491       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:29:27.329497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:29:27.423794       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:29:27.423893       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:27.435704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:29:27.435833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:29:27.435914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.435925       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.537585       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.858932       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:29:50.858967       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:29:50.859022       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:29:50.859063       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.859302       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:29:50.859326       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9] <==
	I1013 21:31:34.411646       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:31:35.836891       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:31:35.837475       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:31:35.837536       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:31:35.837555       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:31:35.902134       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:31:35.902174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:35.907484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.907564       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.914613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:31:35.911364       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1013 21:31:35.924726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:31:35.931219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:31:35.934801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:31:35.937059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:31:35.937223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:31:35.937291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:31:35.937354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1013 21:31:36.808054       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:40:31 functional-613120 kubelet[5912]: E1013 21:40:31.511627    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.579560    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod79175203d8cb7407956c87ca1d03921b/crio-08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d: Error finding container 08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d: Status 404 returned error can't find the container with id 08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.579948    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf324dff40f2879312cdb44fbeb3c82c4/crio-414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121: Error finding container 414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121: Status 404 returned error can't find the container with id 414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.580189    5912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod89be2539-f688-4d2f-b897-965ff79df2fb/crio-c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9: Error finding container c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9: Status 404 returned error can't find the container with id c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.580855    5912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod43ed88cc-603c-41f3-a3d9-9ad0eea42a63/crio-1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83: Error finding container 1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83: Status 404 returned error can't find the container with id 1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.581141    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc6d88d85-406b-4868-99c7-8ab32c2b31f6/crio-823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e: Error finding container 823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e: Status 404 returned error can't find the container with id 823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.581475    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5ffd65e64de9ca81f1b2e2c2257415c6/crio-33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe: Error finding container 33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe: Status 404 returned error can't find the container with id 33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.722239    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391632721809172  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 13 21:40:32 functional-613120 kubelet[5912]: E1013 21:40:32.722286    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391632721809172  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 13 21:40:33 functional-613120 kubelet[5912]: E1013 21:40:33.509117    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:40:33 functional-613120 kubelet[5912]: E1013 21:40:33.509161    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:40:33 functional-613120 kubelet[5912]: E1013 21:40:33.509176    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:40:33 functional-613120 kubelet[5912]: E1013 21:40:33.509219    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:40:42 functional-613120 kubelet[5912]: E1013 21:40:42.724058    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391642723590148  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 13 21:40:42 functional-613120 kubelet[5912]: E1013 21:40:42.724083    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391642723590148  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 13 21:40:45 functional-613120 kubelet[5912]: E1013 21:40:45.508919    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:40:45 functional-613120 kubelet[5912]: E1013 21:40:45.508964    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:40:45 functional-613120 kubelet[5912]: E1013 21:40:45.508981    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:40:45 functional-613120 kubelet[5912]: E1013 21:40:45.509026    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:40:47 functional-613120 kubelet[5912]: E1013 21:40:47.508773    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:40:47 functional-613120 kubelet[5912]: E1013 21:40:47.508834    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:40:47 functional-613120 kubelet[5912]: E1013 21:40:47.508849    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:40:47 functional-613120 kubelet[5912]: E1013 21:40:47.508888    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:40:52 functional-613120 kubelet[5912]: E1013 21:40:52.725755    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391652725315321  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	Oct 13 21:40:52 functional-613120 kubelet[5912]: E1013 21:40:52.725777    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391652725315321  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175579}  inodes_used:{value:87}}"
	
	
	==> storage-provisioner [5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a] <==
	I1013 21:29:28.693909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 21:29:28.735329       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 21:29:28.735455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 21:29:28.740843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:32.201230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:36.463171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:40.062565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:43.116892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.140605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.145316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.145520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 21:29:46.145674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	I1013 21:29:46.146038       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d95083f1-44ef-48bb-916b-20078ba22275", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236 became leader
	W1013 21:29:46.154927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.158535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.246422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	W1013 21:29:48.161690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:48.167620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.172966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.179821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
helpers_test.go:269: (dbg) Run:  kubectl --context functional-613120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-613120 describe pod busybox-mount
helpers_test.go:290: (dbg) kubectl --context functional-613120 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-613120/192.168.39.113
	Start Time:       Mon, 13 Oct 2025 21:39:48 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 21:39:51 +0000
	      Finished:     Mon, 13 Oct 2025 21:39:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2zt2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2zt2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  65s   default-scheduler  Successfully assigned default/busybox-mount to functional-613120
	  Normal  Pulling    65s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     62s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.352s (2.352s including waiting). Image size: 4631262 bytes.
	  Normal  Created    62s   kubelet            Created container: mount-munger
	  Normal  Started    62s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (302.19s)
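
The kubelet entries in the log above fail CreatePodSandbox over and over with "pod sandbox with name ... already exists", which is consistent with a stale CRI-O sandbox left behind by the earlier apiserver restart rather than with the dashboard workload itself. One way to inspect and clear such a sandbox by hand is sketched below; this is not part of the recorded run, the commands are standard minikube/crictl usage, and <POD_SANDBOX_ID> is a placeholder for the ID printed by `crictl pods`:

	$ minikube ssh -p functional-613120
	# list sandboxes for the stuck pod, then stop and remove the stale one by ID
	$ sudo crictl pods --name kube-controller-manager-functional-613120
	$ sudo crictl stopp <POD_SANDBOX_ID>
	$ sudo crictl rmp <POD_SANDBOX_ID>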

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-613120 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-613120 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-13 21:45:53.749291511 +0000 UTC m=+1687.173588567
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-613120 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-613120 describe po hello-node-connect: exit status 1 (70.365168ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "hello-node-connect" not found

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-613120 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-613120 logs -l app=hello-node-connect
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-613120 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.177.172
IPs:                      10.100.177.172
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32130/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
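The `Endpoints: <none>` field above is the immediate reason the NodePort never answered: no ready pod matched the `app=hello-node-connect` selector within the 10m wait. A quick way to confirm that from the same kubeconfig context (standard kubectl queries, shown as a sketch rather than output captured during this run):

	$ kubectl --context functional-613120 get endpoints hello-node-connect
	$ kubectl --context functional-613120 get pods -l app=hello-node-connect -o wide
	$ kubectl --context functional-613120 describe deployment hello-node-connect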
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-613120 -n functional-613120
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs -n 25: (1.637703698s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-613120 image ls                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image save --daemon kicbase/echo-server:functional-613120 --alsologtostderr                          │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/test/nested/copy/19947/hosts                                                       │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/19947.pem                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /usr/share/ca-certificates/19947.pem                                                    │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/51391683.0                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/199472.pem                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /usr/share/ca-certificates/199472.pem                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format short --alsologtostderr                                                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format yaml --alsologtostderr                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh pgrep buildkitd                                                                                  │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │                     │
	│ image          │ functional-613120 image build -t localhost/my-image:functional-613120 testdata/build --alsologtostderr                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format json --alsologtostderr                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format table --alsologtostderr                                                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ update-context │ functional-613120 update-context --alsologtostderr -v=2                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ update-context │ functional-613120 update-context --alsologtostderr -v=2                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ update-context │ functional-613120 update-context --alsologtostderr -v=2                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ service        │ functional-613120 service list                                                                                         │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │ 13 Oct 25 21:45 UTC │
	│ service        │ functional-613120 service list -o json                                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │ 13 Oct 25 21:45 UTC │
	│ service        │ functional-613120 service --namespace=default --https --url hello-node                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │                     │
	│ service        │ functional-613120 service hello-node --url --format={{.IP}}                                                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:35:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:35:51.510606   28143 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:35:51.510911   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.510922   28143 out.go:374] Setting ErrFile to fd 2...
	I1013 21:35:51.510927   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.511106   28143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:35:51.511546   28143 out.go:368] Setting JSON to false
	I1013 21:35:51.512490   28143 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4699,"bootTime":1760386652,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:35:51.512576   28143 start.go:141] virtualization: kvm guest
	I1013 21:35:51.514374   28143 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:35:51.515688   28143 notify.go:220] Checking for updates...
	I1013 21:35:51.515726   28143 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:35:51.517033   28143 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:35:51.518361   28143 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:35:51.520113   28143 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:35:51.521417   28143 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:35:51.522605   28143 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:35:51.524328   28143 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:35:51.524884   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.524955   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.540121   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I1013 21:35:51.540773   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.541425   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.541445   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.541933   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.542144   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.542422   28143 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:35:51.542767   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.542806   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.559961   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45987
	I1013 21:35:51.560583   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.561142   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.561179   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.561531   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.561724   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.597447   28143 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:35:51.598791   28143 start.go:305] selected driver: kvm2
	I1013 21:35:51.598811   28143 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.598947   28143 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:35:51.600183   28143 cni.go:84] Creating CNI manager for ""
	I1013 21:35:51.600254   28143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:35:51.600313   28143 start.go:349] cluster config:
	{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.602828   28143 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.737899479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391954737874783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b51244f6-d0cf-4683-99b3-e8f1dbf0e4b7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.738916239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de4abbec-fca0-40ac-a93a-9c668a0b2b60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.739304312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de4abbec-fca0-40ac-a93a-9c668a0b2b60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.739822134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de4abbec-fca0-40ac-a93a-9c668a0b2b60 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.798122659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52f91236-eb59-428c-b9bc-ef74a755fc28 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.798198128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52f91236-eb59-428c-b9bc-ef74a755fc28 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.799887690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6ca509b-807b-42e9-a093-a43d1e1144b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.801068362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391954801036367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6ca509b-807b-42e9-a093-a43d1e1144b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.802250533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6efa9b4b-c442-4e79-a1c3-ffbcd41b5086 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.802497492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6efa9b4b-c442-4e79-a1c3-ffbcd41b5086 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.802934191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6efa9b4b-c442-4e79-a1c3-ffbcd41b5086 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.852294404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4786c0f9-1546-43cc-89e7-93eed47d34d6 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.852647126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4786c0f9-1546-43cc-89e7-93eed47d34d6 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.854365224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b49d8c3-b159-4fb9-a373-333206f98648 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.855886087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391954855859646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b49d8c3-b159-4fb9-a373-333206f98648 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.856973758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08ab6709-a2e9-4a0f-a5b2-2d29eaabe660 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.857030398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08ab6709-a2e9-4a0f-a5b2-2d29eaabe660 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.857293654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08ab6709-a2e9-4a0f-a5b2-2d29eaabe660 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.910119889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b4d85d2-9545-4cb3-9e81-64f045c6e24b name=/runtime.v1.RuntimeService/Version
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.910222743Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b4d85d2-9545-4cb3-9e81-64f045c6e24b name=/runtime.v1.RuntimeService/Version
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.912218883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2051853f-da39-4b85-9799-bbc9d10c20d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.912912610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391954912887169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2051853f-da39-4b85-9799-bbc9d10c20d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.913601939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=155558e1-56bc-4d2d-91ba-7dfda0063642 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.913679133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=155558e1-56bc-4d2d-91ba-7dfda0063642 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:45:54 functional-613120 crio[5570]: time="2025-10-13 21:45:54.913907450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=155558e1-56bc-4d2d-91ba-7dfda0063642 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	79e6030984a0e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              mount-munger              0                   a93df8828ff74       busybox-mount
	e07550cbe1c90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 minutes ago      Running             coredns                   2                   713d35e968dc4       coredns-66bc5c9577-6vfs9
	f81ad801f39c2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      14 minutes ago      Running             kube-proxy                2                   8057a65067477       kube-proxy-kcjdv
	c8df3188b3da9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      14 minutes ago      Running             kube-scheduler            2                   e8679e9415572       kube-scheduler-functional-613120
	66242d56b1908       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      14 minutes ago      Running             kube-apiserver            0                   e1ecf0fe1c6ec       kube-apiserver-functional-613120
	ea72e205f15e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      14 minutes ago      Running             etcd                      2                   5de8ccaa96a09       etcd-functional-613120
	cccca560ba245       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      16 minutes ago      Exited              coredns                   1                   823c5e67c612e       coredns-66bc5c9577-6vfs9
	5acb4b6ac1a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       1                   1a5f90e559f5a       storage-provisioner
	203bfd3c79457       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      16 minutes ago      Exited              kube-scheduler            1                   33858ed7ddb02       kube-scheduler-functional-613120
	b44a46bca9e30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      16 minutes ago      Exited              kube-proxy                1                   c8f88dad0e67d       kube-proxy-kcjdv
	26fd846e4671c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      16 minutes ago      Exited              etcd                      1                   414c394d4f508       etcd-functional-613120
	6ee4b6616f5b7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      16 minutes ago      Exited              kube-controller-manager   1                   08cbfc93ded1a       kube-controller-manager-functional-613120
	
	
	==> coredns [cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42456 - 27503 "HINFO IN 7724600574698421821.8526601767399151930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049782112s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55800 - 5148 "HINFO IN 1677522805808455981.5721645023815659663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036495987s
	
	
	==> describe nodes <==
	Name:               functional-613120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-613120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=functional-613120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_28_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:28:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-613120
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:45:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:41:07 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:41:07 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:41:07 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:41:07 +0000   Mon, 13 Oct 2025 21:28:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-613120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a6c3b9eff6414082874fcb18b5974c
	  System UUID:                16a6c3b9-eff6-4140-8287-4fcb18b5974c
	  Boot ID:                    08d1688d-f04d-4990-8e88-f64344bac422
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6vfs9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 etcd-functional-613120                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-functional-613120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-613120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-kcjdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-613120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     17m                kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeReady                17m                kubelet          Node functional-613120 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[Oct13 21:27] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008604] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct13 21:28] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088305] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096018] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.147837] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.872269] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.344503] kauditd_printk_skb: 243 callbacks suppressed
	[Oct13 21:29] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.696503] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.771558] kauditd_printk_skb: 290 callbacks suppressed
	[ +14.216137] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.213321] kauditd_printk_skb: 12 callbacks suppressed
	[Oct13 21:31] kauditd_printk_skb: 209 callbacks suppressed
	[  +5.599027] kauditd_printk_skb: 153 callbacks suppressed
	[Oct13 21:32] kauditd_printk_skb: 98 callbacks suppressed
	[Oct13 21:35] kauditd_printk_skb: 16 callbacks suppressed
	[  +3.169795] kauditd_printk_skb: 63 callbacks suppressed
	[Oct13 21:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.987661] kauditd_printk_skb: 59 callbacks suppressed
	[Oct13 21:40] crun[9003]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976] <==
	{"level":"warn","ts":"2025-10-13T21:29:26.617665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.623351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.631487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.641884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.655241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.658347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.717551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:29:50.857092Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:29:50.857237Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"error","ts":"2025-10-13T21:29:50.857324Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.938357Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-10-13T21:29:50.938508Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T21:29:50.938555Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938568Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938627Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938633Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938667Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938674Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"error","ts":"2025-10-13T21:29:50.942099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942123Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-10-13T21:29:50.942129Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> etcd [ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80] <==
	{"level":"warn","ts":"2025-10-13T21:31:34.957914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.964462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.970099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.979868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.985236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.996943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.011573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.017606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.025865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.035148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.043812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.052613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.062810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.076994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.083661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.091500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.098948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.109285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.119002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.129117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.135664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.209541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:41:34.423310Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":870}
	{"level":"info","ts":"2025-10-13T21:41:34.433434Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":870,"took":"9.671064ms","hash":3328083090,"current-db-size-bytes":2162688,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2162688,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-13T21:41:34.433507Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3328083090,"revision":870,"compact-revision":-1}
	
	
	==> kernel <==
	 21:45:55 up 18 min,  0 users,  load average: 0.03, 0.21, 0.33
	Linux functional-613120 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25] <==
	I1013 21:31:35.989941       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 21:31:35.994336       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 21:31:35.996139       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 21:31:35.996251       1 aggregator.go:171] initial CRD sync complete...
	I1013 21:31:35.996323       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 21:31:35.996330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:31:35.996335       1 cache.go:39] Caches are synced for autoregister controller
	I1013 21:31:35.998223       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 21:31:36.001759       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:31:36.015526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 21:31:36.016269       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:31:36.790607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:31:37.524043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:31:37.700261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:31:37.777593       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:31:37.846439       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:31:37.863830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:35:46.309636       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.121.66"}
	I1013 21:35:50.986537       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.180.30"}
	I1013 21:35:52.706618       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:35:52.877867       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.45.50"}
	I1013 21:35:52.901108       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.97.166"}
	I1013 21:35:53.478919       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.177.172"}
	I1013 21:40:06.127459       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.19.102"}
	I1013 21:41:35.918476       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5] <==
	I1013 21:29:30.142439       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:29:30.142680       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:29:30.142877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:29:30.142688       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:29:30.142705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:29:30.142939       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-613120"
	I1013 21:29:30.143010       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:29:30.142698       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:29:30.142711       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 21:29:30.143817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:29:30.144966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:29:30.146454       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:29:30.147790       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:29:30.154058       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:29:30.160261       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:29:30.163983       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:29:30.167909       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:29:30.172471       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:29:30.177559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.178764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:29:30.204417       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.204466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:29:30.204473       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:29:30.208646       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:29:30.209816       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87] <==
	I1013 21:29:28.884906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:29:28.988240       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:29:28.988284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:29:28.988364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:29:29.063938       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:29:29.064138       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:29:29.064231       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:29:29.075846       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:29:29.077723       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:29:29.077739       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:29.084538       1 config.go:200] "Starting service config controller"
	I1013 21:29:29.084550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:29:29.084569       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:29:29.084574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:29:29.084584       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:29:29.084587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:29:29.085167       1 config.go:309] "Starting node config controller"
	I1013 21:29:29.085173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:29:29.085178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:29:29.185174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:29:29.185311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:29:29.185334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9] <==
	I1013 21:31:38.397204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:31:38.498253       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:31:38.498309       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:31:38.498450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:31:38.666515       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:31:38.666590       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:31:38.666620       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:31:38.681690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:31:38.682623       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:31:38.682729       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:38.697128       1 config.go:309] "Starting node config controller"
	I1013 21:31:38.697163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:31:38.697169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:31:38.697425       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:31:38.697433       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:31:38.697497       1 config.go:200] "Starting service config controller"
	I1013 21:31:38.697501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:31:38.697512       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:31:38.697515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:31:38.798520       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:31:38.798568       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:31:38.799526       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac] <==
	I1013 21:29:25.656553       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:29:27.329435       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:29:27.329481       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:29:27.329491       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:29:27.329497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:29:27.423794       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:29:27.423893       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:27.435704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:29:27.435833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:29:27.435914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.435925       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.537585       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.858932       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:29:50.858967       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:29:50.859022       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:29:50.859063       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.859302       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:29:50.859326       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9] <==
	I1013 21:31:34.411646       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:31:35.836891       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:31:35.837475       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:31:35.837536       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:31:35.837555       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:31:35.902134       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:31:35.902174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:35.907484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.907564       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.914613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:31:35.911364       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1013 21:31:35.924726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:31:35.931219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:31:35.934801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:31:35.937059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:31:35.937223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:31:35.937291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:31:35.937354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1013 21:31:36.808054       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.510466    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.579796    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf324dff40f2879312cdb44fbeb3c82c4/crio-414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121: Error finding container 414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121: Status 404 returned error can't find the container with id 414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.580608    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5ffd65e64de9ca81f1b2e2c2257415c6/crio-33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe: Error finding container 33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe: Status 404 returned error can't find the container with id 33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.580946    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod79175203d8cb7407956c87ca1d03921b/crio-08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d: Error finding container 08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d: Status 404 returned error can't find the container with id 08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.581290    5912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod43ed88cc-603c-41f3-a3d9-9ad0eea42a63/crio-1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83: Error finding container 1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83: Status 404 returned error can't find the container with id 1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.581666    5912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod89be2539-f688-4d2f-b897-965ff79df2fb/crio-c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9: Error finding container c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9: Status 404 returned error can't find the container with id c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.582052    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc6d88d85-406b-4868-99c7-8ab32c2b31f6/crio-823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e: Error finding container 823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e: Status 404 returned error can't find the container with id 823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.804156    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391932803106136  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:45:32 functional-613120 kubelet[5912]: E1013 21:45:32.804179    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391932803106136  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:45:35 functional-613120 kubelet[5912]: E1013 21:45:35.508888    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:45:35 functional-613120 kubelet[5912]: E1013 21:45:35.508932    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:45:35 functional-613120 kubelet[5912]: E1013 21:45:35.508950    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:45:35 functional-613120 kubelet[5912]: E1013 21:45:35.508999    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:45:42 functional-613120 kubelet[5912]: E1013 21:45:42.805930    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391942805459100  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:45:42 functional-613120 kubelet[5912]: E1013 21:45:42.805952    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391942805459100  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:45:45 functional-613120 kubelet[5912]: E1013 21:45:45.509957    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:45:45 functional-613120 kubelet[5912]: E1013 21:45:45.510156    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:45:45 functional-613120 kubelet[5912]: E1013 21:45:45.510177    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:45:45 functional-613120 kubelet[5912]: E1013 21:45:45.510249    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:45:50 functional-613120 kubelet[5912]: E1013 21:45:50.510730    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:45:50 functional-613120 kubelet[5912]: E1013 21:45:50.510775    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:45:50 functional-613120 kubelet[5912]: E1013 21:45:50.510791    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:45:50 functional-613120 kubelet[5912]: E1013 21:45:50.510835    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:45:52 functional-613120 kubelet[5912]: E1013 21:45:52.809306    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391952808725736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:45:52 functional-613120 kubelet[5912]: E1013 21:45:52.809325    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391952808725736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a] <==
	I1013 21:29:28.693909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 21:29:28.735329       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 21:29:28.735455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 21:29:28.740843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:32.201230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:36.463171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:40.062565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:43.116892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.140605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.145316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.145520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 21:29:46.145674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	I1013 21:29:46.146038       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d95083f1-44ef-48bb-916b-20078ba22275", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236 became leader
	W1013 21:29:46.154927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.158535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.246422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	W1013 21:29:48.161690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:48.167620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.172966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.179821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
helpers_test.go:269: (dbg) Run:  kubectl --context functional-613120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-613120 describe pod busybox-mount
helpers_test.go:290: (dbg) kubectl --context functional-613120 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-613120/192.168.39.113
	Start Time:       Mon, 13 Oct 2025 21:39:48 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 21:39:51 +0000
	      Finished:     Mon, 13 Oct 2025 21:39:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2zt2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2zt2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m8s  default-scheduler  Successfully assigned default/busybox-mount to functional-613120
	  Normal  Pulling    6m8s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m5s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.352s (2.352s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m5s  kubelet            Created container: mount-munger
	  Normal  Started    6m5s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.98s)
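Note: the post-mortem above locates the completed busybox-mount pod with `kubectl get po -A --field-selector=status.phase!=Running`. For reference only, a minimal client-go sketch of the same query (a standalone illustration, not part of the test suite; the kubeconfig path and current-context handling are assumptions) could look like:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig (~/.kube/config) currently points at the
	// functional-613120 cluster; the test helper selects it explicitly with --context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same server-side filter the helper uses: every pod whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

In this run the only hit is busybox-mount, whose phase is Succeeded (exit code 0), so it is reported as non-running even though it completed cleanly.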

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (234.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [43ed88cc-603c-41f3-a3d9-9ad0eea42a63] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004752055s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-613120 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-613120 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:35:57.495642   19947 retry.go:31] will retry after 1.834674386s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001861790 VolumeMode:0xc0018617a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:35:59.387986   19947 retry.go:31] will retry after 3.819224767s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc002031630 VolumeMode:0xc002031640 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:36:03.262024   19947 retry.go:31] will retry after 4.01055352s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc002031f20 VolumeMode:0xc002031f30 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:36:07.327416   19947 retry.go:31] will retry after 6.725760157s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b0a6d0 VolumeMode:0xc001b0a6e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:36:14.109264   19947 retry.go:31] will retry after 5.973186592s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017615d0 VolumeMode:0xc0017615e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:36:20.140959   19947 retry.go:31] will retry after 8.696437973s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b0bdb0 VolumeMode:0xc001b0bdc0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:36:28.893418   19947 retry.go:31] will retry after 13.586146267s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b3d490 VolumeMode:0xc001b3d4a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:36:42.537843   19947 retry.go:31] will retry after 33.340055744s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c254a0 VolumeMode:0xc001c254b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:37:15.937091   19947 retry.go:31] will retry after 48.878546156s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001e20890 VolumeMode:0xc001e208a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:38:04.872739   19947 retry.go:31] will retry after 56.681142756s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc002259be0 VolumeMode:0xc002259bf0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
I1013 21:39:01.612126   19947 retry.go:31] will retry after 43.313572112s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001604b70 VolumeMode:0xc001604b80 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-613120 get pvc myclaim -o=json
functional_test_pvc_test.go:98: failed to check storage phase: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:125b5941-d53f-4de6-8588-071fc9627fe8 ResourceVersion:852 Generation:0 CreationTimestamp:2025-10-13 21:35:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001722090 VolumeMode:0xc0017220a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
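The retry lines above show the test polling `kubectl get pvc myclaim -o=json` for roughly four minutes while the claim stays `Pending`. For reference only, an equivalent poll written directly against client-go (a standalone sketch, not the suite's own helper; the namespace, claim name, and four-minute budget come from the log, while the kubeconfig handling is an assumption) could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig already targets the functional-613120 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the claim until it reports phase Bound, giving up after four minutes,
	// which mirrors the retry/backoff loop visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := client.CoreV1().PersistentVolumeClaims("default").Get(ctx, "myclaim", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			fmt.Printf("pvc %s phase: %s\n", pvc.Name, pvc.Status.Phase)
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
	if err != nil {
		fmt.Println("claim never became Bound:", err)
	}
}

The claim never leaving `Pending` is consistent with the earlier readiness check, where the storage-provisioner pod itself is reported as Running but ContainersNotReady.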
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-613120 -n functional-613120
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs -n 25: (1.397912561s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache     │ delete registry.k8s.io/pause:3.1                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ cache     │ delete registry.k8s.io/pause:latest                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ kubectl   │ functional-613120 kubectl -- --context functional-613120 get pods                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │ 13 Oct 25 21:29 UTC │
	│ start     │ -p functional-613120 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:29 UTC │                     │
	│ service   │ invalid-svc -p functional-613120                                                                                                    │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ cp        │ functional-613120 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                  │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ config    │ functional-613120 config unset cpus                                                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ config    │ functional-613120 config get cpus                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ config    │ functional-613120 config set cpus 2                                                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ config    │ functional-613120 config get cpus                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ config    │ functional-613120 config unset cpus                                                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ ssh       │ functional-613120 ssh -n functional-613120 sudo cat /home/docker/cp-test.txt                                                        │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ config    │ functional-613120 config get cpus                                                                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ start     │ -p functional-613120 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ cp        │ functional-613120 cp functional-613120:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2602273253/001/cp-test.txt          │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ start     │ -p functional-613120 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ start     │ -p functional-613120 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ ssh       │ functional-613120 ssh -n functional-613120 sudo cat /home/docker/cp-test.txt                                                        │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ dashboard │ --url --port 36195 -p functional-613120 --alsologtostderr -v=1                                                                      │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │                     │
	│ cp        │ functional-613120 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                           │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ addons    │ functional-613120 addons list                                                                                                       │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ addons    │ functional-613120 addons list -o json                                                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ ssh       │ functional-613120 ssh echo hello                                                                                                    │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ ssh       │ functional-613120 ssh -n functional-613120 sudo cat /tmp/does/not/exist/cp-test.txt                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	│ ssh       │ functional-613120 ssh cat /etc/hostname                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:35 UTC │ 13 Oct 25 21:35 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:35:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:35:51.510606   28143 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:35:51.510911   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.510922   28143 out.go:374] Setting ErrFile to fd 2...
	I1013 21:35:51.510927   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.511106   28143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:35:51.511546   28143 out.go:368] Setting JSON to false
	I1013 21:35:51.512490   28143 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4699,"bootTime":1760386652,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:35:51.512576   28143 start.go:141] virtualization: kvm guest
	I1013 21:35:51.514374   28143 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:35:51.515688   28143 notify.go:220] Checking for updates...
	I1013 21:35:51.515726   28143 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:35:51.517033   28143 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:35:51.518361   28143 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:35:51.520113   28143 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:35:51.521417   28143 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:35:51.522605   28143 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:35:51.524328   28143 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:35:51.524884   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.524955   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.540121   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I1013 21:35:51.540773   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.541425   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.541445   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.541933   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.542144   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.542422   28143 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:35:51.542767   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.542806   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.559961   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45987
	I1013 21:35:51.560583   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.561142   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.561179   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.561531   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.561724   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.597447   28143 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:35:51.598791   28143 start.go:305] selected driver: kvm2
	I1013 21:35:51.598811   28143 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.598947   28143 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:35:51.600183   28143 cni.go:84] Creating CNI manager for ""
	I1013 21:35:51.600254   28143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:35:51.600313   28143 start.go:349] cluster config:
	{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.602828   28143 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.722056460Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6vfs9,Uid:c6d88d85-406b-4868-99c7-8ab32c2b31f6,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391097959965301,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:31:37.437890061Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcjdv,Uid:89be2539-f688-4d2f-b897-965ff79df2fb,Namespace:kube-system,At
tempt:3,},State:SANDBOX_READY,CreatedAt:1760391097797084414,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:31:37.437886664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-613120,Uid:5ffd65e64de9ca81f1b2e2c2257415c6,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391092968024701,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,tier: control-plane,},Annotat
ions:map[string]string{kubernetes.io/config.hash: 5ffd65e64de9ca81f1b2e2c2257415c6,kubernetes.io/config.seen: 2025-10-13T21:31:32.441134376Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-613120,Uid:522f7dd28b9425c450ca359799bcd7d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760391092952910106,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.113:8441,kubernetes.io/config.hash: 522f7dd28b9425c450ca359799bcd7d7,kubernetes.io/config.seen: 2025-10-13T21:31:32.441132398Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&PodSandboxMetadata{Name:etcd-functional-613120,Uid:f324dff40f2879312cdb44fbeb3c82c4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391092934953549,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.113:2379,kubernetes.io/config.hash: f324dff40f2879312cdb44fbeb3c82c4,kubernetes.io/config.seen: 2025-10-13T21:31:32.441128741Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6vfs9,Uid:c6d88d85-406b-4868-99c7-8ab32c2b31f6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:17603909621
56176547,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:28:25.966283446Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-613120,Uid:5ffd65e64de9ca81f1b2e2c2257415c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390962083196581,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ffd65e64de9ca81f1b2e2c225
7415c6,kubernetes.io/config.seen: 2025-10-13T21:28:20.412871410Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&PodSandboxMetadata{Name:etcd-functional-613120,Uid:f324dff40f2879312cdb44fbeb3c82c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390962021279264,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.113:2379,kubernetes.io/config.hash: f324dff40f2879312cdb44fbeb3c82c4,kubernetes.io/config.seen: 2025-10-13T21:28:20.412874932Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&PodSandboxMetadata{Name:storage-p
rovisioner,Uid:43ed88cc-603c-41f3-a3d9-9ad0eea42a63,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390961923208792,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":t
rue,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-13T21:28:27.744544011Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcjdv,Uid:89be2539-f688-4d2f-b897-965ff79df2fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390961901765252,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:28:24.990101037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Me
tadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-613120,Uid:79175203d8cb7407956c87ca1d03921b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390961851247800,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 79175203d8cb7407956c87ca1d03921b,kubernetes.io/config.seen: 2025-10-13T21:28:20.412881250Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5d2dad34-6703-46a0-afeb-1a8327e047be name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.723823047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a372ccd6-f98b-4b3d-9127-0678147897c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.723914376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a372ccd6-f98b-4b3d-9127-0678147897c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.724151223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379
ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d92
4aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuber
netes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a372ccd6-f98b-4b3d-9127-0678147897c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.750964518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a8a6a34-a1e9-4d69-80eb-6287e1cc3dd0 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.751033797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a8a6a34-a1e9-4d69-80eb-6287e1cc3dd0 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.756005974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0d8a7c2-b59d-4b2b-854e-d64d9681b466 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.758650385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391585758286789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:158885,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0d8a7c2-b59d-4b2b-854e-d64d9681b466 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.759972874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d43858b5-5f4d-46dc-ad73-13d4b5603e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.760033697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d43858b5-5f4d-46dc-ad73-13d4b5603e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.760266394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379
ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d92
4aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuber
netes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d43858b5-5f4d-46dc-ad73-13d4b5603e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.806012537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1deac69f-83e2-4c39-84ce-e5577c2c8c2a name=/runtime.v1.RuntimeService/Version
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.806177723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1deac69f-83e2-4c39-84ce-e5577c2c8c2a name=/runtime.v1.RuntimeService/Version
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.807618801Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=502f1b8d-8c5e-4927-9b69-fcf915c8377d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.808697043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391585808672043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:158885,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=502f1b8d-8c5e-4927-9b69-fcf915c8377d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.809595363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b41ea5b3-b1d2-45c5-874d-bde748234173 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.809677723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b41ea5b3-b1d2-45c5-874d-bde748234173 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.809909903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379
ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d92
4aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuber
netes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b41ea5b3-b1d2-45c5-874d-bde748234173 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.847445592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=482564f8-ce81-4e6e-8c10-aec4296eb852 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.847536153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=482564f8-ce81-4e6e-8c10-aec4296eb852 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.849743395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5057c2a-75ba-49b6-b75e-0f1f4637fd0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.850320651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760391585850292481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:158885,},InodesUsed:&UInt64Value{Value:77,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5057c2a-75ba-49b6-b75e-0f1f4637fd0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.851047039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7eaee7b2-dcbb-429b-a37b-e9261d673865 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.851116699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7eaee7b2-dcbb-429b-a37b-e9261d673865 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:39:45 functional-613120 crio[5570]: time="2025-10-13 21:39:45.851457518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[str
ing]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379
ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d92
4aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuber
netes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7eaee7b2-dcbb-429b-a37b-e9261d673865 name=/runtime.v1.RuntimeService/ListContainers
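
The block above is the raw payload of a single /runtime.v1.RuntimeService/ListContainers response, as logged by cri-o's otel-collector interceptor. As a rough illustration only, the same RPC can be issued directly against the runtime socket with the Kubernetes CRI client; the sketch below is an assumption-laden example (the socket path, timeout, and output formatting are mine, not taken from this report), not something the test run executes.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: cri-o is listening on its default unix socket.
	const criSocket = "unix:///var/run/crio/crio.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, criSocket,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC as the interceptor log above: list every container the runtime knows about.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}

	for _, c := range resp.Containers {
		// Print a condensed line per container, similar to the table that follows.
		fmt.Printf("%-13.13s  %-25s  attempt=%d  state=%v\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

The container status table that follows presents essentially the same inventory in the condensed form that crictl ps -a produces on the node.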
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e07550cbe1c90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   8 minutes ago       Running             coredns                   2                   713d35e968dc4       coredns-66bc5c9577-6vfs9
	f81ad801f39c2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   8 minutes ago       Running             kube-proxy                2                   8057a65067477       kube-proxy-kcjdv
	c8df3188b3da9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 minutes ago       Running             kube-scheduler            2                   e8679e9415572       kube-scheduler-functional-613120
	66242d56b1908       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 minutes ago       Running             kube-apiserver            0                   e1ecf0fe1c6ec       kube-apiserver-functional-613120
	ea72e205f15e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 minutes ago       Running             etcd                      2                   5de8ccaa96a09       etcd-functional-613120
	cccca560ba245       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   10 minutes ago      Exited              coredns                   1                   823c5e67c612e       coredns-66bc5c9577-6vfs9
	5acb4b6ac1a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Exited              storage-provisioner       1                   1a5f90e559f5a       storage-provisioner
	203bfd3c79457       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 minutes ago      Exited              kube-scheduler            1                   33858ed7ddb02       kube-scheduler-functional-613120
	b44a46bca9e30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   10 minutes ago      Exited              kube-proxy                1                   c8f88dad0e67d       kube-proxy-kcjdv
	26fd846e4671c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 minutes ago      Exited              etcd                      1                   414c394d4f508       etcd-functional-613120
	6ee4b6616f5b7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 minutes ago      Exited              kube-controller-manager   1                   08cbfc93ded1a       kube-controller-manager-functional-613120
	
	
	==> coredns [cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42456 - 27503 "HINFO IN 7724600574698421821.8526601767399151930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049782112s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55800 - 5148 "HINFO IN 1677522805808455981.5721645023815659663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036495987s
	
	
	==> describe nodes <==
	Name:               functional-613120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-613120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=functional-613120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_28_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:28:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-613120
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:39:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:38:54 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:38:54 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:38:54 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:38:54 +0000   Mon, 13 Oct 2025 21:28:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-613120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a6c3b9eff6414082874fcb18b5974c
	  System UUID:                16a6c3b9-eff6-4140-8287-4fcb18b5974c
	  Boot ID:                    08d1688d-f04d-4990-8e88-f64344bac422
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6vfs9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-613120                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-613120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-controller-manager-functional-613120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-kcjdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-613120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 8m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeReady                11m                    kubelet          Node functional-613120 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                    node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  NodeHasNoDiskPressure    8m14s (x8 over 8m14s)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m14s (x8 over 8m14s)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m14s (x7 over 8m14s)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m14s                  kubelet          Starting kubelet.
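
	For orientation, the percentages in the "Non-terminated Pods" and "Allocated resources" tables above are simply the summed requests divided by the node's allocatable capacity: CPU requests total 100m + 100m + 250m + 200m + 100m = 750m, and 750m of the 2000m allocatable (2 CPUs) is 37.5%, shown as 37%; memory requests total 70Mi + 100Mi = 170Mi, roughly 4.3% of the 4008588Ki (≈3914Mi) allocatable, shown as 4%.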
	
	
	==> dmesg <==
	[Oct13 21:27] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008604] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct13 21:28] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088305] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096018] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.147837] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.872269] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.344503] kauditd_printk_skb: 243 callbacks suppressed
	[Oct13 21:29] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.696503] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.771558] kauditd_printk_skb: 290 callbacks suppressed
	[ +14.216137] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.213321] kauditd_printk_skb: 12 callbacks suppressed
	[Oct13 21:31] kauditd_printk_skb: 209 callbacks suppressed
	[  +5.599027] kauditd_printk_skb: 153 callbacks suppressed
	[Oct13 21:32] kauditd_printk_skb: 98 callbacks suppressed
	[Oct13 21:35] kauditd_printk_skb: 16 callbacks suppressed
	[  +3.169795] kauditd_printk_skb: 63 callbacks suppressed
	
	
	==> etcd [26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976] <==
	{"level":"warn","ts":"2025-10-13T21:29:26.617665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.623351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.631487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.641884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.655241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.658347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.717551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:29:50.857092Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:29:50.857237Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"error","ts":"2025-10-13T21:29:50.857324Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.938357Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-10-13T21:29:50.938508Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T21:29:50.938555Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938568Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938627Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938633Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938667Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938674Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"error","ts":"2025-10-13T21:29:50.942099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942123Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-10-13T21:29:50.942129Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> etcd [ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80] <==
	{"level":"warn","ts":"2025-10-13T21:31:34.930526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.940219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.949732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.957914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.964462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.970099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.979868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.985236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.996943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.011573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.017606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.025865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.035148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.043812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.052613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.062810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.076994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.083661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.091500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.098948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.109285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.119002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.129117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.135664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.209541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:39:46 up 11 min,  0 users,  load average: 0.11, 0.48, 0.44
	Linux functional-613120 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25] <==
	I1013 21:31:35.989683       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 21:31:35.989738       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 21:31:35.989941       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 21:31:35.994336       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 21:31:35.996139       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 21:31:35.996251       1 aggregator.go:171] initial CRD sync complete...
	I1013 21:31:35.996323       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 21:31:35.996330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:31:35.996335       1 cache.go:39] Caches are synced for autoregister controller
	I1013 21:31:35.998223       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 21:31:36.001759       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:31:36.015526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 21:31:36.016269       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:31:36.790607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:31:37.524043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:31:37.700261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:31:37.777593       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:31:37.846439       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:31:37.863830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:35:46.309636       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.121.66"}
	I1013 21:35:50.986537       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.180.30"}
	I1013 21:35:52.706618       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:35:52.877867       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.45.50"}
	I1013 21:35:52.901108       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.97.166"}
	I1013 21:35:53.478919       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.177.172"}
	
	
	==> kube-controller-manager [6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5] <==
	I1013 21:29:30.142439       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:29:30.142680       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:29:30.142877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:29:30.142688       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:29:30.142705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:29:30.142939       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-613120"
	I1013 21:29:30.143010       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:29:30.142698       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:29:30.142711       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 21:29:30.143817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:29:30.144966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:29:30.146454       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:29:30.147790       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:29:30.154058       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:29:30.160261       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:29:30.163983       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:29:30.167909       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:29:30.172471       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:29:30.177559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.178764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:29:30.204417       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.204466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:29:30.204473       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:29:30.208646       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:29:30.209816       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87] <==
	I1013 21:29:28.884906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:29:28.988240       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:29:28.988284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:29:28.988364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:29:29.063938       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:29:29.064138       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:29:29.064231       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:29:29.075846       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:29:29.077723       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:29:29.077739       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:29.084538       1 config.go:200] "Starting service config controller"
	I1013 21:29:29.084550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:29:29.084569       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:29:29.084574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:29:29.084584       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:29:29.084587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:29:29.085167       1 config.go:309] "Starting node config controller"
	I1013 21:29:29.085173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:29:29.085178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:29:29.185174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:29:29.185311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:29:29.185334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9] <==
	I1013 21:31:38.397204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:31:38.498253       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:31:38.498309       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:31:38.498450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:31:38.666515       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:31:38.666590       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:31:38.666620       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:31:38.681690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:31:38.682623       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:31:38.682729       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:38.697128       1 config.go:309] "Starting node config controller"
	I1013 21:31:38.697163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:31:38.697169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:31:38.697425       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:31:38.697433       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:31:38.697497       1 config.go:200] "Starting service config controller"
	I1013 21:31:38.697501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:31:38.697512       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:31:38.697515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:31:38.798520       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:31:38.798568       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:31:38.799526       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac] <==
	I1013 21:29:25.656553       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:29:27.329435       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:29:27.329481       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:29:27.329491       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:29:27.329497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:29:27.423794       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:29:27.423893       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:27.435704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:29:27.435833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:29:27.435914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.435925       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.537585       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.858932       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:29:50.858967       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:29:50.859022       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:29:50.859063       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.859302       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:29:50.859326       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9] <==
	I1013 21:31:34.411646       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:31:35.836891       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:31:35.837475       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:31:35.837536       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:31:35.837555       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:31:35.902134       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:31:35.902174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:35.907484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.907564       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.914613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:31:35.911364       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1013 21:31:35.924726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:31:35.931219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:31:35.934801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:31:35.937059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:31:35.937223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:31:35.937291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:31:35.937354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1013 21:31:36.808054       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:39:24 functional-613120 kubelet[5912]: E1013 21:39:24.509246    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:39:24 functional-613120 kubelet[5912]: E1013 21:39:24.509263    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:39:24 functional-613120 kubelet[5912]: E1013 21:39:24.509312    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:39:26 functional-613120 kubelet[5912]: E1013 21:39:26.510887    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:39:26 functional-613120 kubelet[5912]: E1013 21:39:26.510961    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:39:26 functional-613120 kubelet[5912]: E1013 21:39:26.510999    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:39:26 functional-613120 kubelet[5912]: E1013 21:39:26.511069    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.579855    5912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod43ed88cc-603c-41f3-a3d9-9ad0eea42a63/crio-1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83: Error finding container 1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83: Status 404 returned error can't find the container with id 1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.580576    5912 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod89be2539-f688-4d2f-b897-965ff79df2fb/crio-c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9: Error finding container c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9: Status 404 returned error can't find the container with id c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.580875    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod79175203d8cb7407956c87ca1d03921b/crio-08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d: Error finding container 08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d: Status 404 returned error can't find the container with id 08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.581463    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc6d88d85-406b-4868-99c7-8ab32c2b31f6/crio-823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e: Error finding container 823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e: Status 404 returned error can't find the container with id 823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.582132    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5ffd65e64de9ca81f1b2e2c2257415c6/crio-33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe: Error finding container 33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe: Status 404 returned error can't find the container with id 33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.582347    5912 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf324dff40f2879312cdb44fbeb3c82c4/crio-414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121: Error finding container 414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121: Status 404 returned error can't find the container with id 414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.708950    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391572708701361  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:158885}  inodes_used:{value:77}}"
	Oct 13 21:39:32 functional-613120 kubelet[5912]: E1013 21:39:32.708968    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391572708701361  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:158885}  inodes_used:{value:77}}"
	Oct 13 21:39:36 functional-613120 kubelet[5912]: E1013 21:39:36.509982    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:39:36 functional-613120 kubelet[5912]: E1013 21:39:36.510033    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:39:36 functional-613120 kubelet[5912]: E1013 21:39:36.510048    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:39:36 functional-613120 kubelet[5912]: E1013 21:39:36.510096    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:39:40 functional-613120 kubelet[5912]: E1013 21:39:40.511026    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:39:40 functional-613120 kubelet[5912]: E1013 21:39:40.511071    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:39:40 functional-613120 kubelet[5912]: E1013 21:39:40.511086    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:39:40 functional-613120 kubelet[5912]: E1013 21:39:40.511128    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:39:42 functional-613120 kubelet[5912]: E1013 21:39:42.711315    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760391582710354406  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:158885}  inodes_used:{value:77}}"
	Oct 13 21:39:42 functional-613120 kubelet[5912]: E1013 21:39:42.711340    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760391582710354406  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:158885}  inodes_used:{value:77}}"
	
	
	==> storage-provisioner [5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a] <==
	I1013 21:29:28.693909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 21:29:28.735329       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 21:29:28.735455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 21:29:28.740843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:32.201230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:36.463171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:40.062565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:43.116892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.140605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.145316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.145520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 21:29:46.145674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	I1013 21:29:46.146038       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d95083f1-44ef-48bb-916b-20078ba22275", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236 became leader
	W1013 21:29:46.154927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.158535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.246422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	W1013 21:29:48.161690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:48.167620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.172966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.179821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
helpers_test.go:269: (dbg) Run:  kubectl --context functional-613120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (234.73s)
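
The failure above trails the repeated kubelet "pod sandbox with name ... already exists" errors shown in the logs. For manual triage outside the harness, the same non-Running pod check the post-mortem runs (helpers_test.go:269) can be issued directly, together with a CRI-level sandbox listing. This is a minimal sketch, assuming the functional-613120 profile from this run is still up and that crictl is present in the guest (as it normally is on the crio runtime image):

	# list pods that are not Running, same field selector the post-mortem uses
	kubectl --context functional-613120 get po -A --field-selector=status.phase!=Running
	# inspect CRI-O's view of existing sandboxes from inside the node
	out/minikube-linux-amd64 -p functional-613120 ssh "sudo crictl pods"
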

                                                
                                    
TestFunctional/parallel/MySQL (602.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-613120 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
E1013 21:40:49.942769   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-13 21:50:06.448389038 +0000 UTC m=+1939.872686100
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
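For comparison outside the Go harness, the readiness wait can be approximated with kubectl directly. A minimal sketch, assuming the mysql deployment from testdata/mysql.yaml has already been applied to the functional-613120 context:

	# wait up to 10 minutes for the mysql pod to become Ready, mirroring the 10m0s test timeout
	kubectl --context functional-613120 wait --for=condition=ready pod -l app=mysql -n default --timeout=600s
	# if it never becomes Ready, show the pod events and status for the reason
	kubectl --context functional-613120 describe pod -l app=mysql -n default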
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-613120 -n functional-613120
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs -n 25: (1.380318259s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-613120 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image save --daemon kicbase/echo-server:functional-613120 --alsologtostderr                          │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/test/nested/copy/19947/hosts                                                       │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/19947.pem                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /usr/share/ca-certificates/19947.pem                                                    │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/51391683.0                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/199472.pem                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /usr/share/ca-certificates/199472.pem                                                   │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                               │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format short --alsologtostderr                                                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format yaml --alsologtostderr                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ ssh            │ functional-613120 ssh pgrep buildkitd                                                                                  │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │                     │
	│ image          │ functional-613120 image build -t localhost/my-image:functional-613120 testdata/build --alsologtostderr                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls                                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format json --alsologtostderr                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ image          │ functional-613120 image ls --format table --alsologtostderr                                                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ update-context │ functional-613120 update-context --alsologtostderr -v=2                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ update-context │ functional-613120 update-context --alsologtostderr -v=2                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ update-context │ functional-613120 update-context --alsologtostderr -v=2                                                                │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:40 UTC │ 13 Oct 25 21:40 UTC │
	│ service        │ functional-613120 service list                                                                                         │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │ 13 Oct 25 21:45 UTC │
	│ service        │ functional-613120 service list -o json                                                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │ 13 Oct 25 21:45 UTC │
	│ service        │ functional-613120 service --namespace=default --https --url hello-node                                                 │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │                     │
	│ service        │ functional-613120 service hello-node --url --format={{.IP}}                                                            │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │                     │
	│ service        │ functional-613120 service hello-node --url                                                                             │ functional-613120 │ jenkins │ v1.37.0 │ 13 Oct 25 21:45 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:35:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:35:51.510606   28143 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:35:51.510911   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.510922   28143 out.go:374] Setting ErrFile to fd 2...
	I1013 21:35:51.510927   28143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.511106   28143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:35:51.511546   28143 out.go:368] Setting JSON to false
	I1013 21:35:51.512490   28143 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4699,"bootTime":1760386652,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:35:51.512576   28143 start.go:141] virtualization: kvm guest
	I1013 21:35:51.514374   28143 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:35:51.515688   28143 notify.go:220] Checking for updates...
	I1013 21:35:51.515726   28143 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:35:51.517033   28143 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:35:51.518361   28143 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:35:51.520113   28143 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:35:51.521417   28143 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:35:51.522605   28143 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:35:51.524328   28143 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:35:51.524884   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.524955   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.540121   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I1013 21:35:51.540773   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.541425   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.541445   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.541933   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.542144   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.542422   28143 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:35:51.542767   28143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.542806   28143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.559961   28143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45987
	I1013 21:35:51.560583   28143 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.561142   28143 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.561179   28143 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.561531   28143 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.561724   28143 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.597447   28143 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:35:51.598791   28143 start.go:305] selected driver: kvm2
	I1013 21:35:51.598811   28143 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.598947   28143 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:35:51.600183   28143 cni.go:84] Creating CNI manager for ""
	I1013 21:35:51.600254   28143 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:35:51.600313   28143 start.go:349] cluster config:
	{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.602828   28143 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.211858510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=021df8bb-8dd9-4d58-b248-b9b5c716a81b name=/runtime.v1.RuntimeService/Version
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.212745133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95e047e7-2b6a-4198-9228-39da986b9a5f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.213520064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760392207213495784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95e047e7-2b6a-4198-9228-39da986b9a5f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.214046405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31a39b1c-c8fd-4474-9af3-376af7eceec2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.214133590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31a39b1c-c8fd-4474-9af3-376af7eceec2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.214439921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31a39b1c-c8fd-4474-9af3-376af7eceec2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.231640985Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ae5a0152-8b49-48a3-8b3a-a1fb08a648a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.232821347Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:c40a2d9a-c334-4b51-8f13-dd88c18eed33,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1760391588754172387,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:39:48.436590120Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6vfs9,Uid:c6d88d85-406b-4868-99c7-8ab32c2b31f6,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,Creat
edAt:1760391097959965301,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:31:37.437890061Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcjdv,Uid:89be2539-f688-4d2f-b897-965ff79df2fb,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391097797084414,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-10-13T21:31:37.437886664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-613120,Uid:5ffd65e64de9ca81f1b2e2c2257415c6,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391092968024701,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ffd65e64de9ca81f1b2e2c2257415c6,kubernetes.io/config.seen: 2025-10-13T21:31:32.441134376Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-613120,Uid:522f7dd28b9425c450ca359799bcd7d7,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1760391092952910106,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.113:8441,kubernetes.io/config.hash: 522f7dd28b9425c450ca359799bcd7d7,kubernetes.io/config.seen: 2025-10-13T21:31:32.441132398Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&PodSandboxMetadata{Name:etcd-functional-613120,Uid:f324dff40f2879312cdb44fbeb3c82c4,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1760391092934953549,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.113:2379,kubernetes.io/config.hash: f324dff40f2879312cdb44fbeb3c82c4,kubernetes.io/config.seen: 2025-10-13T21:31:32.441128741Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6vfs9,Uid:c6d88d85-406b-4868-99c7-8ab32c2b31f6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390962156176547,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:28:25.966283446Z,kubernetes.io/config.source: api,},RuntimeHandl
er:,},&PodSandbox{Id:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-613120,Uid:5ffd65e64de9ca81f1b2e2c2257415c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390962083196581,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ffd65e64de9ca81f1b2e2c2257415c6,kubernetes.io/config.seen: 2025-10-13T21:28:20.412871410Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&PodSandboxMetadata{Name:etcd-functional-613120,Uid:f324dff40f2879312cdb44fbeb3c82c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390962021279264,Labels:map[string]strin
g{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.113:2379,kubernetes.io/config.hash: f324dff40f2879312cdb44fbeb3c82c4,kubernetes.io/config.seen: 2025-10-13T21:28:20.412874932Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:43ed88cc-603c-41f3-a3d9-9ad0eea42a63,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390961923208792,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41
f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-13T21:28:27.744544011Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcjdv,Uid:89be2539-f688
-4d2f-b897-965ff79df2fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390961901765252,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T21:28:24.990101037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-613120,Uid:79175203d8cb7407956c87ca1d03921b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760390961851247800,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 79175203d8cb7407956c87ca1d03921b,kubernetes.io/config.seen: 2025-10-13T21:28:20.412881250Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ae5a0152-8b49-48a3-8b3a-a1fb08a648a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.234082103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66f2e2ce-8368-4dd4-b6c2-cc3cadb42a20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.234139327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66f2e2ce-8368-4dd4-b6c2-cc3cadb42a20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.234473655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66f2e2ce-8368-4dd4-b6c2-cc3cadb42a20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.253590311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19104e43-7dfa-4c12-8db2-3f998a01a579 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.253752398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19104e43-7dfa-4c12-8db2-3f998a01a579 name=/runtime.v1.RuntimeService/Version
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.255211231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94871dee-5a2f-4b0f-be53-e85256a02e0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.255942623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760392207255916105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94871dee-5a2f-4b0f-be53-e85256a02e0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.256908129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=870297fa-7a72-4395-af29-0d3c50b81069 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.257004137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=870297fa-7a72-4395-af29-0d3c50b81069 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.257263459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=870297fa-7a72-4395-af29-0d3c50b81069 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.302085329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8393324-0112-4d86-b2ea-00a031f4414c name=/runtime.v1.RuntimeService/Version
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.302166050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8393324-0112-4d86-b2ea-00a031f4414c name=/runtime.v1.RuntimeService/Version
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.303650493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1307fa14-5b2f-451d-99a4-d40e9828e363 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.304262657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760392207304240261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201240,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1307fa14-5b2f-451d-99a4-d40e9828e363 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.305030652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44bceedc-d042-4080-9467-2866e623193a name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.305161266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44bceedc-d042-4080-9467-2866e623193a name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 21:50:07 functional-613120 crio[5570]: time="2025-10-13 21:50:07.305498098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec,PodSandboxId:a93df8828ff74effd38efe8eeb5659091f56b1a22ee639ad7fc4918ab4900322,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1760391591337665493,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c40a2d9a-c334-4b51-8f13-dd88c18eed33,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074,PodSandboxId:713d35e968dc436254f1c9bbf0844707b99573dab6d494bb87bf55faf2fbb173,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760391098418435883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoc
ol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9,PodSandboxId:8057a650674770b70138555ba30614fc2cc2ecc0d734aadd18368a15c4884201,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760391097978806084,Labels:map[strin
g]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25,PodSandboxId:e1ecf0fe1c6ec82928d9eae5f2be878362a35243599e34195c81af1d90215b7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760391093158187758,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 522f7dd28b9425c450ca359799bcd7d7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9,PodSandboxId:e8679e9415572c2e6f5bedacae057281f3ed15cca43b21e4988db957b7c4d9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee
9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760391093168647063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80,PodSandboxId:5de8ccaa96a09de423704317c2aa3d0a4bcb42c975d97a64c39243f47fc2778f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760391093133706293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15,PodSandboxId:823c5e67c612e79bd1481ee57b3a12b18dc0fde2660f213723bea2dc6a52629e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924a
a3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760390968666210571,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6vfs9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d88d85-406b-4868-99c7-8ab32c2b31f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a,PodSandboxId:1a5f90e559f5a459645f680193f727145f0496c536bb728c5443d0b58a571e83,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760390962618532579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43ed88cc-603c-41f3-a3d9-9ad0eea42a63,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac,PodSandboxId:33858ed7ddb02fa41013fcea22864e7ee8564db2287f34817a1d4cab7cd3fdfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760390962553977066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ffd65e64de9ca81f1b2e2c2257415c6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87,PodSandboxId:c8f88dad0e67d97049df6d67b5732fa21b9356aa2ba9f89ffe054e6a29e605f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760390962519561589,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89be2539-f688-4d2f-b897-965ff79df2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976,PodSandboxId:414c394d4f5087a156cc90d0db1ac29c41a8d2f5517bee2bd7970d5d6e639121,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760390962485245021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f324dff40f2879312cdb44fbeb3c82c4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5,PodSandboxId:08cbfc93ded1a60d8a97fe776f9104b7e6367a21117836e3850a5abb5cd6865d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760390962229097708,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-613120,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79175203d8cb7407956c87ca1d03921b,},Annotations:map[string]string{io.kuberne
tes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44bceedc-d042-4080-9467-2866e623193a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	79e6030984a0e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   a93df8828ff74       busybox-mount
	e07550cbe1c90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago      Running             coredns                   2                   713d35e968dc4       coredns-66bc5c9577-6vfs9
	f81ad801f39c2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      18 minutes ago      Running             kube-proxy                2                   8057a65067477       kube-proxy-kcjdv
	c8df3188b3da9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      18 minutes ago      Running             kube-scheduler            2                   e8679e9415572       kube-scheduler-functional-613120
	66242d56b1908       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      18 minutes ago      Running             kube-apiserver            0                   e1ecf0fe1c6ec       kube-apiserver-functional-613120
	ea72e205f15e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago      Running             etcd                      2                   5de8ccaa96a09       etcd-functional-613120
	cccca560ba245       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      20 minutes ago      Exited              coredns                   1                   823c5e67c612e       coredns-66bc5c9577-6vfs9
	5acb4b6ac1a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   1a5f90e559f5a       storage-provisioner
	203bfd3c79457       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      20 minutes ago      Exited              kube-scheduler            1                   33858ed7ddb02       kube-scheduler-functional-613120
	b44a46bca9e30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      20 minutes ago      Exited              kube-proxy                1                   c8f88dad0e67d       kube-proxy-kcjdv
	26fd846e4671c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      20 minutes ago      Exited              etcd                      1                   414c394d4f508       etcd-functional-613120
	6ee4b6616f5b7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      20 minutes ago      Exited              kube-controller-manager   1                   08cbfc93ded1a       kube-controller-manager-functional-613120
	
	
	==> coredns [cccca560ba245f49986c6f2272ebfe892f25d04e2e81fb10ba7f7102e95d4f15] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42456 - 27503 "HINFO IN 7724600574698421821.8526601767399151930. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.049782112s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e07550cbe1c903805740ae03145586d5dbf3c2e2f5d75935a7f0d7dbd8017074] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55800 - 5148 "HINFO IN 1677522805808455981.5721645023815659663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036495987s
	
	
	==> describe nodes <==
	Name:               functional-613120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-613120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=functional-613120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_28_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:28:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-613120
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:49:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:47:25 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:47:25 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:47:25 +0000   Mon, 13 Oct 2025 21:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:47:25 +0000   Mon, 13 Oct 2025 21:28:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    functional-613120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 16a6c3b9eff6414082874fcb18b5974c
	  System UUID:                16a6c3b9-eff6-4140-8287-4fcb18b5974c
	  Boot ID:                    08d1688d-f04d-4990-8e88-f64344bac422
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6vfs9                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     21m
	  kube-system                 etcd-functional-613120                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         21m
	  kube-system                 kube-apiserver-functional-613120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-functional-613120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-kcjdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-functional-613120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node functional-613120 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node functional-613120 event: Registered Node functional-613120 in Controller
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-613120 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-613120 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node functional-613120 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[Oct13 21:27] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000064] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008604] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct13 21:28] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088305] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096018] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.147837] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.872269] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.344503] kauditd_printk_skb: 243 callbacks suppressed
	[Oct13 21:29] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.696503] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.771558] kauditd_printk_skb: 290 callbacks suppressed
	[ +14.216137] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.213321] kauditd_printk_skb: 12 callbacks suppressed
	[Oct13 21:31] kauditd_printk_skb: 209 callbacks suppressed
	[  +5.599027] kauditd_printk_skb: 153 callbacks suppressed
	[Oct13 21:32] kauditd_printk_skb: 98 callbacks suppressed
	[Oct13 21:35] kauditd_printk_skb: 16 callbacks suppressed
	[  +3.169795] kauditd_printk_skb: 63 callbacks suppressed
	[Oct13 21:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.987661] kauditd_printk_skb: 59 callbacks suppressed
	[Oct13 21:40] crun[9003]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [26fd846e4671c2a18eaac3936dd0fe9f0081cd0dcee0c249a8c39512f64eb976] <==
	{"level":"warn","ts":"2025-10-13T21:29:26.617665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.623351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.631487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.641884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.655241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.658347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:29:26.717551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:29:50.857092Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:29:50.857237Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	{"level":"error","ts":"2025-10-13T21:29:50.857324Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:29:50.938335Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.938357Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8069059f79d446ff","current-leader-member-id":"8069059f79d446ff"}
	{"level":"info","ts":"2025-10-13T21:29:50.938508Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T21:29:50.938555Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938568Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938627Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938633Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938667Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:29:50.938674Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.113:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:29:50.938679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942024Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"error","ts":"2025-10-13T21:29:50.942099Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.113:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:29:50.942123Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.113:2380"}
	{"level":"info","ts":"2025-10-13T21:29:50.942129Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-613120","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.113:2380"],"advertise-client-urls":["https://192.168.39.113:2379"]}
	
	
	==> etcd [ea72e205f15e9bee347c672920aa3c49cc8c2992554819e66d56891dc5671f80] <==
	{"level":"warn","ts":"2025-10-13T21:31:34.979868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.985236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:34.996943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.011573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.017606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.025865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.035148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.043812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.052613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.062810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.076994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.083661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.091500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.098948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.109285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.119002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.129117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.135664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:31:35.209541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44616","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:41:34.423310Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":870}
	{"level":"info","ts":"2025-10-13T21:41:34.433434Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":870,"took":"9.671064ms","hash":3328083090,"current-db-size-bytes":2162688,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2162688,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-13T21:41:34.433507Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3328083090,"revision":870,"compact-revision":-1}
	{"level":"info","ts":"2025-10-13T21:46:34.431149Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":983}
	{"level":"info","ts":"2025-10-13T21:46:34.434170Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":983,"took":"2.669647ms","hash":3342785133,"current-db-size-bytes":2162688,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":864256,"current-db-size-in-use":"864 kB"}
	{"level":"info","ts":"2025-10-13T21:46:34.434216Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3342785133,"revision":983,"compact-revision":870}
	
	
	==> kernel <==
	 21:50:07 up 22 min,  0 users,  load average: 0.14, 0.31, 0.35
	Linux functional-613120 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [66242d56b190812f28a153441bb55d10bd96659f846105a21d5bdce146835c25] <==
	I1013 21:31:35.989941       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 21:31:35.994336       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 21:31:35.996139       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 21:31:35.996251       1 aggregator.go:171] initial CRD sync complete...
	I1013 21:31:35.996323       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 21:31:35.996330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:31:35.996335       1 cache.go:39] Caches are synced for autoregister controller
	I1013 21:31:35.998223       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 21:31:36.001759       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:31:36.015526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 21:31:36.016269       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:31:36.790607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:31:37.524043       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:31:37.700261       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:31:37.777593       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:31:37.846439       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:31:37.863830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:35:46.309636       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.121.66"}
	I1013 21:35:50.986537       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.180.30"}
	I1013 21:35:52.706618       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 21:35:52.877867       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.45.50"}
	I1013 21:35:52.901108       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.97.166"}
	I1013 21:35:53.478919       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.177.172"}
	I1013 21:40:06.127459       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.19.102"}
	I1013 21:41:35.918476       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6ee4b6616f5b7b67eb758e46052afdb66608eb700a52ac609b016e101814cbc5] <==
	I1013 21:29:30.142439       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:29:30.142680       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:29:30.142877       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:29:30.142688       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:29:30.142705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:29:30.142939       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-613120"
	I1013 21:29:30.143010       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:29:30.142698       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:29:30.142711       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 21:29:30.143817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:29:30.144966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:29:30.146454       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:29:30.147790       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:29:30.154058       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:29:30.160261       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:29:30.163983       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:29:30.167909       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:29:30.172471       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:29:30.177559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.178764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:29:30.204417       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:29:30.204466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:29:30.204473       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:29:30.208646       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:29:30.209816       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [b44a46bca9e30d64bfbfd3eae943fafba4b4c1c92fb594f514a3bd091ffd0e87] <==
	I1013 21:29:28.884906       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:29:28.988240       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:29:28.988284       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:29:28.988364       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:29:29.063938       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:29:29.064138       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:29:29.064231       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:29:29.075846       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:29:29.077723       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:29:29.077739       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:29.084538       1 config.go:200] "Starting service config controller"
	I1013 21:29:29.084550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:29:29.084569       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:29:29.084574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:29:29.084584       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:29:29.084587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:29:29.085167       1 config.go:309] "Starting node config controller"
	I1013 21:29:29.085173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:29:29.085178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:29:29.185174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:29:29.185311       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:29:29.185334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f81ad801f39c2510afbfcfc6d287b87778ab5712e388c9261c4bd1ae4b5574f9] <==
	I1013 21:31:38.397204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:31:38.498253       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:31:38.498309       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.113"]
	E1013 21:31:38.498450       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:31:38.666515       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1013 21:31:38.666590       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 21:31:38.666620       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:31:38.681690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:31:38.682623       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:31:38.682729       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:38.697128       1 config.go:309] "Starting node config controller"
	I1013 21:31:38.697163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:31:38.697169       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:31:38.697425       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:31:38.697433       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:31:38.697497       1 config.go:200] "Starting service config controller"
	I1013 21:31:38.697501       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:31:38.697512       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:31:38.697515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:31:38.798520       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:31:38.798568       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:31:38.799526       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [203bfd3c7945723b3a1f274ccbd4c49bb08bebdfafdd3cfd3d0a62665ed76dac] <==
	I1013 21:29:25.656553       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:29:27.329435       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:29:27.329481       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:29:27.329491       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:29:27.329497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:29:27.423794       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:29:27.423893       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:29:27.435704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:29:27.435833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:29:27.435914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.435925       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:27.537585       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.858932       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:29:50.858967       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:29:50.859022       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:29:50.859063       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:29:50.859302       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:29:50.859326       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8df3188b3da91c87564c67437665edcf88864daad57fd6b8755dcaa54607cd9] <==
	I1013 21:31:34.411646       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:31:35.836891       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:31:35.837475       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:31:35.837536       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:31:35.837555       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:31:35.902134       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:31:35.902174       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:31:35.907484       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.907564       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:31:35.914613       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:31:35.911364       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1013 21:31:35.924726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1013 21:31:35.931219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 21:31:35.934801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:31:35.937059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:31:35.937223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 21:31:35.937291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:31:35.937354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1013 21:31:36.808054       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:49:42 functional-613120 kubelet[5912]: E1013 21:49:42.861501    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760392182861086472  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:49:43 functional-613120 kubelet[5912]: E1013 21:49:43.511306    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:49:43 functional-613120 kubelet[5912]: E1013 21:49:43.511608    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:49:43 functional-613120 kubelet[5912]: E1013 21:49:43.511678    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:49:43 functional-613120 kubelet[5912]: E1013 21:49:43.511752    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:49:50 functional-613120 kubelet[5912]: E1013 21:49:50.510126    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:49:50 functional-613120 kubelet[5912]: E1013 21:49:50.510162    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:49:50 functional-613120 kubelet[5912]: E1013 21:49:50.510177    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:49:50 functional-613120 kubelet[5912]: E1013 21:49:50.510237    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:49:52 functional-613120 kubelet[5912]: E1013 21:49:52.863231    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760392192862761536  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:49:52 functional-613120 kubelet[5912]: E1013 21:49:52.863278    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760392192862761536  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:49:55 functional-613120 kubelet[5912]: E1013 21:49:55.509199    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:49:55 functional-613120 kubelet[5912]: E1013 21:49:55.509260    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:49:55 functional-613120 kubelet[5912]: E1013 21:49:55.509278    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:49:55 functional-613120 kubelet[5912]: E1013 21:49:55.509321    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	Oct 13 21:50:02 functional-613120 kubelet[5912]: E1013 21:50:02.869636    5912 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760392202869072245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:50:02 functional-613120 kubelet[5912]: E1013 21:50:02.869854    5912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760392202869072245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201240}  inodes_used:{value:103}}"
	Oct 13 21:50:04 functional-613120 kubelet[5912]: E1013 21:50:04.509952    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists"
	Oct 13 21:50:04 functional-613120 kubelet[5912]: E1013 21:50:04.509993    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:50:04 functional-613120 kubelet[5912]: E1013 21:50:04.510008    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\" already exists" pod="kube-system/storage-provisioner"
	Oct 13 21:50:04 functional-613120 kubelet[5912]: E1013 21:50:04.510051    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(43ed88cc-603c-41f3-a3d9-9ad0eea42a63)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_43ed88cc-603c-41f3-a3d9-9ad0eea42a63_2\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="43ed88cc-603c-41f3-a3d9-9ad0eea42a63"
	Oct 13 21:50:06 functional-613120 kubelet[5912]: E1013 21:50:06.511473    5912 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists"
	Oct 13 21:50:06 functional-613120 kubelet[5912]: E1013 21:50:06.511543    5912 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:50:06 functional-613120 kubelet[5912]: E1013 21:50:06.511561    5912 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\" already exists" pod="kube-system/kube-controller-manager-functional-613120"
	Oct 13 21:50:06 functional-613120 kubelet[5912]: E1013 21:50:06.511625    5912 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-functional-613120_kube-system(79175203d8cb7407956c87ca1d03921b)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-controller-manager-functional-613120_kube-system_79175203d8cb7407956c87ca1d03921b_2\\\" already exists\"" pod="kube-system/kube-controller-manager-functional-613120" podUID="79175203d8cb7407956c87ca1d03921b"
	
	
	==> storage-provisioner [5acb4b6ac1a5d2a7093346c8e61251e661880c4d307588f204e6d925ab920c0a] <==
	I1013 21:29:28.693909       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 21:29:28.735329       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 21:29:28.735455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 21:29:28.740843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:32.201230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:36.463171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:40.062565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:43.116892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.140605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.145316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.145520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 21:29:46.145674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	I1013 21:29:46.146038       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d95083f1-44ef-48bb-916b-20078ba22275", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236 became leader
	W1013 21:29:46.154927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:46.158535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:29:46.246422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-613120_fd093491-f35d-4ff5-b195-f6f669d79236!
	W1013 21:29:48.161690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:48.167620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.172966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:29:50.179821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
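The kubelet entries above repeat "pod sandbox with name ... already exists" for kube-controller-manager and storage-provisioner, which usually indicates that cri-o still holds a sandbox under that name from the previous kubelet run. One possible manual cleanup from inside the node (illustrative only, not something helpers_test.go performs; the sandbox ID placeholder must be filled in from the crictl output):

	$ out/minikube-linux-amd64 -p functional-613120 ssh
	# list the stale sandbox by name, stop it if needed, then remove it
	$ sudo crictl pods --name kube-controller-manager-functional-613120
	$ sudo crictl stopp <pod-sandbox-id>
	$ sudo crictl rmp <pod-sandbox-id>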
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
helpers_test.go:269: (dbg) Run:  kubectl --context functional-613120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-613120 describe pod busybox-mount
helpers_test.go:290: (dbg) kubectl --context functional-613120 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-613120/192.168.39.113
	Start Time:       Mon, 13 Oct 2025 21:39:48 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://79e6030984a0e904e8e9add2aae36132b41cddb0167069e4c35e01ec13f8a0ec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 13 Oct 2025 21:39:51 +0000
	      Finished:     Mon, 13 Oct 2025 21:39:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2zt2x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2zt2x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-613120
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.352s (2.352s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.47s)
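Note that busybox-mount is listed as a non-running pod even though it completed successfully: its Status above is Succeeded, and the field selector status.phase!=Running used by the helper matches every phase other than Running, including completed pods. A sketch of a narrower query that would surface only pods that are actually stuck or failed (the chained selector is an assumption layered on the helper's command, not part of the test suite):

	$ kubectl --context functional-613120 get po -A -o name \
	    --field-selector=status.phase!=Running,status.phase!=Succeeded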

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-613120 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-613120 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-613120 -n functional-613120
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-13 21:45:51.285454228 +0000 UTC m=+1684.709751290
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.46s)
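The deployment and NodePort service are created, but no app=hello-node pod reaches Running within the 10m0s window. A few manual checks one might run against the same context to see where the rollout is stuck, for example whether the pod is failing to pull the kicbase/echo-server image (illustrative commands, not part of functional_test.go):

	$ kubectl --context functional-613120 get deployment hello-node
	$ kubectl --context functional-613120 get pods -l app=hello-node -o wide
	$ kubectl --context functional-613120 describe pods -l app=hello-node
	$ kubectl --context functional-613120 get events --sort-by=.lastTimestamp | tail -n 20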

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 service --namespace=default --https --url hello-node: exit status 115 (328.526189ms)

                                                
                                                
-- stdout --
	https://192.168.39.113:30896
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-613120 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 service hello-node --url --format={{.IP}}: exit status 115 (308.001826ms)

                                                
                                                
-- stdout --
	192.168.39.113
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-613120 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 service hello-node --url: exit status 115 (332.572248ms)

                                                
                                                
-- stdout --
	http://192.168.39.113:30896
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-613120 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.113:30896
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.33s)
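
All three ServiceCmd subtests above exit with status 115 for the same reason: SVC_UNREACHABLE, i.e. minikube found no running pod behind the hello-node service at the moment the command ran. A quick way to confirm that from the same kubeconfig context is to look at the service's endpoints and the pods that should back it. The commands below are only a sketch against the functional-613120 profile; the app=hello-node selector is an assumption based on the label that "kubectl create deployment hello-node" applies by default.

	kubectl --context functional-613120 -n default get endpoints hello-node
	kubectl --context functional-613120 -n default get pods -l app=hello-node -o wide
	# retry once a backing pod is Running:
	out/minikube-linux-amd64 -p functional-613120 service hello-node --url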

                                                
                                    
TestPreload (153.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-047519 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-047519 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m33.078125579s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-047519 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-047519 image pull gcr.io/k8s-minikube/busybox: (2.397228087s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-047519
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-047519: (7.294927475s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-047519 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 22:30:49.942796   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:30:51.005443   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-047519 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.219288489s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-047519 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-13 22:31:21.495810267 +0000 UTC m=+4414.920107341
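
The image list above contains only the images restored from the v1.32.0 preload tarball, so the busybox image pulled before the stop/start cycle was not retained across the restart. The sequence the test drives can be replayed by hand with the same minikube commands; this is a sketch only, reusing the profile name and image from the log, with the logging and driver-update flags from the exact invocations trimmed for brevity.

	out/minikube-linux-amd64 start -p test-preload-047519 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-047519 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-047519
	out/minikube-linux-amd64 start -p test-preload-047519 --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-047519 image list   # the test expects gcr.io/k8s-minikube/busybox to still appear here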
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-047519 -n test-preload-047519
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-047519 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-047519 logs -n 25: (1.129725654s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-320444 ssh -n multinode-320444-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ ssh     │ multinode-320444 ssh -n multinode-320444 sudo cat /home/docker/cp-test_multinode-320444-m03_multinode-320444.txt                                                                    │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ cp      │ multinode-320444 cp multinode-320444-m03:/home/docker/cp-test.txt multinode-320444-m02:/home/docker/cp-test_multinode-320444-m03_multinode-320444-m02.txt                           │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ ssh     │ multinode-320444 ssh -n multinode-320444-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ ssh     │ multinode-320444 ssh -n multinode-320444-m02 sudo cat /home/docker/cp-test_multinode-320444-m03_multinode-320444-m02.txt                                                            │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ node    │ multinode-320444 node stop m03                                                                                                                                                      │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ node    │ multinode-320444 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:18 UTC │
	│ node    │ list -p multinode-320444                                                                                                                                                            │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │                     │
	│ stop    │ -p multinode-320444                                                                                                                                                                 │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:18 UTC │ 13 Oct 25 22:21 UTC │
	│ start   │ -p multinode-320444 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:21 UTC │ 13 Oct 25 22:23 UTC │
	│ node    │ list -p multinode-320444                                                                                                                                                            │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │                     │
	│ node    │ multinode-320444 node delete m03                                                                                                                                                    │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:23 UTC │
	│ stop    │ multinode-320444 stop                                                                                                                                                               │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:23 UTC │ 13 Oct 25 22:26 UTC │
	│ start   │ -p multinode-320444 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:26 UTC │ 13 Oct 25 22:28 UTC │
	│ node    │ list -p multinode-320444                                                                                                                                                            │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │                     │
	│ start   │ -p multinode-320444-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-320444-m02 │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │                     │
	│ start   │ -p multinode-320444-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-320444-m03 │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │ 13 Oct 25 22:28 UTC │
	│ node    │ add -p multinode-320444                                                                                                                                                             │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │                     │
	│ delete  │ -p multinode-320444-m03                                                                                                                                                             │ multinode-320444-m03 │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │ 13 Oct 25 22:28 UTC │
	│ delete  │ -p multinode-320444                                                                                                                                                                 │ multinode-320444     │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │ 13 Oct 25 22:28 UTC │
	│ start   │ -p test-preload-047519 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-047519  │ jenkins │ v1.37.0 │ 13 Oct 25 22:28 UTC │ 13 Oct 25 22:30 UTC │
	│ image   │ test-preload-047519 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-047519  │ jenkins │ v1.37.0 │ 13 Oct 25 22:30 UTC │ 13 Oct 25 22:30 UTC │
	│ stop    │ -p test-preload-047519                                                                                                                                                              │ test-preload-047519  │ jenkins │ v1.37.0 │ 13 Oct 25 22:30 UTC │ 13 Oct 25 22:30 UTC │
	│ start   │ -p test-preload-047519 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-047519  │ jenkins │ v1.37.0 │ 13 Oct 25 22:30 UTC │ 13 Oct 25 22:31 UTC │
	│ image   │ test-preload-047519 image list                                                                                                                                                      │ test-preload-047519  │ jenkins │ v1.37.0 │ 13 Oct 25 22:31 UTC │ 13 Oct 25 22:31 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:30:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:30:33.095308   55392 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:30:33.095570   55392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:30:33.095580   55392 out.go:374] Setting ErrFile to fd 2...
	I1013 22:30:33.095587   55392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:30:33.095805   55392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:30:33.096277   55392 out.go:368] Setting JSON to false
	I1013 22:30:33.097185   55392 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7981,"bootTime":1760386652,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:30:33.097271   55392 start.go:141] virtualization: kvm guest
	I1013 22:30:33.099342   55392 out.go:179] * [test-preload-047519] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:30:33.100887   55392 notify.go:220] Checking for updates...
	I1013 22:30:33.100999   55392 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:30:33.102254   55392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:30:33.103499   55392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:30:33.104699   55392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:30:33.105882   55392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:30:33.107304   55392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:30:33.108856   55392 config.go:182] Loaded profile config "test-preload-047519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1013 22:30:33.109305   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:30:33.109390   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:30:33.122761   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I1013 22:30:33.123298   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:30:33.123831   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:30:33.123853   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:30:33.124215   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:30:33.124379   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:33.126153   55392 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1013 22:30:33.127527   55392 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:30:33.127839   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:30:33.127875   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:30:33.140806   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I1013 22:30:33.141228   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:30:33.141636   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:30:33.141660   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:30:33.141970   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:30:33.142169   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:33.175054   55392 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 22:30:33.176336   55392 start.go:305] selected driver: kvm2
	I1013 22:30:33.176350   55392 start.go:925] validating driver "kvm2" against &{Name:test-preload-047519 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.32.0 ClusterName:test-preload-047519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:30:33.176441   55392 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:30:33.177110   55392 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:30:33.177232   55392 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:30:33.190378   55392 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:30:33.190401   55392 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:30:33.203546   55392 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:30:33.203914   55392 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:30:33.203967   55392 cni.go:84] Creating CNI manager for ""
	I1013 22:30:33.204009   55392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:30:33.204070   55392 start.go:349] cluster config:
	{Name:test-preload-047519 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-047519 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:30:33.204212   55392 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:30:33.205947   55392 out.go:179] * Starting "test-preload-047519" primary control-plane node in "test-preload-047519" cluster
	I1013 22:30:33.207204   55392 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1013 22:30:33.226834   55392 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1013 22:30:33.226857   55392 cache.go:58] Caching tarball of preloaded images
	I1013 22:30:33.227023   55392 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1013 22:30:33.228817   55392 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1013 22:30:33.230101   55392 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1013 22:30:33.255776   55392 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1013 22:30:33.255823   55392 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1013 22:30:36.392570   55392 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1013 22:30:36.392731   55392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/config.json ...
	I1013 22:30:36.393029   55392 start.go:360] acquireMachinesLock for test-preload-047519: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 22:30:36.393106   55392 start.go:364] duration metric: took 47.844µs to acquireMachinesLock for "test-preload-047519"
	I1013 22:30:36.393128   55392 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:30:36.393136   55392 fix.go:54] fixHost starting: 
	I1013 22:30:36.393516   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:30:36.393556   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:30:36.406969   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I1013 22:30:36.407479   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:30:36.407926   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:30:36.407952   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:30:36.408303   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:30:36.408496   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:36.408661   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetState
	I1013 22:30:36.410405   55392 fix.go:112] recreateIfNeeded on test-preload-047519: state=Stopped err=<nil>
	I1013 22:30:36.410450   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	W1013 22:30:36.410617   55392 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:30:36.413680   55392 out.go:252] * Restarting existing kvm2 VM for "test-preload-047519" ...
	I1013 22:30:36.413706   55392 main.go:141] libmachine: (test-preload-047519) Calling .Start
	I1013 22:30:36.413884   55392 main.go:141] libmachine: (test-preload-047519) starting domain...
	I1013 22:30:36.413908   55392 main.go:141] libmachine: (test-preload-047519) ensuring networks are active...
	I1013 22:30:36.414820   55392 main.go:141] libmachine: (test-preload-047519) Ensuring network default is active
	I1013 22:30:36.415215   55392 main.go:141] libmachine: (test-preload-047519) Ensuring network mk-test-preload-047519 is active
	I1013 22:30:36.415623   55392 main.go:141] libmachine: (test-preload-047519) getting domain XML...
	I1013 22:30:36.416648   55392 main.go:141] libmachine: (test-preload-047519) DBG | starting domain XML:
	I1013 22:30:36.416663   55392 main.go:141] libmachine: (test-preload-047519) DBG | <domain type='kvm'>
	I1013 22:30:36.416674   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <name>test-preload-047519</name>
	I1013 22:30:36.416684   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <uuid>1015083c-ad6d-4241-8bdc-92f81771c24d</uuid>
	I1013 22:30:36.416709   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 22:30:36.416722   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 22:30:36.416733   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 22:30:36.416743   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <os>
	I1013 22:30:36.416761   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 22:30:36.416776   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <boot dev='cdrom'/>
	I1013 22:30:36.416786   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <boot dev='hd'/>
	I1013 22:30:36.416790   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <bootmenu enable='no'/>
	I1013 22:30:36.416798   55392 main.go:141] libmachine: (test-preload-047519) DBG |   </os>
	I1013 22:30:36.416809   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <features>
	I1013 22:30:36.416818   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <acpi/>
	I1013 22:30:36.416828   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <apic/>
	I1013 22:30:36.416837   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <pae/>
	I1013 22:30:36.416843   55392 main.go:141] libmachine: (test-preload-047519) DBG |   </features>
	I1013 22:30:36.416854   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 22:30:36.416862   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <clock offset='utc'/>
	I1013 22:30:36.416893   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 22:30:36.416917   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <on_reboot>restart</on_reboot>
	I1013 22:30:36.416927   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <on_crash>destroy</on_crash>
	I1013 22:30:36.416942   55392 main.go:141] libmachine: (test-preload-047519) DBG |   <devices>
	I1013 22:30:36.416955   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 22:30:36.416966   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <disk type='file' device='cdrom'>
	I1013 22:30:36.416980   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <driver name='qemu' type='raw'/>
	I1013 22:30:36.416995   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/boot2docker.iso'/>
	I1013 22:30:36.417007   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 22:30:36.417017   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <readonly/>
	I1013 22:30:36.417086   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 22:30:36.417114   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </disk>
	I1013 22:30:36.417126   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <disk type='file' device='disk'>
	I1013 22:30:36.417142   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 22:30:36.417180   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/test-preload-047519.rawdisk'/>
	I1013 22:30:36.417197   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <target dev='hda' bus='virtio'/>
	I1013 22:30:36.417209   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 22:30:36.417219   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </disk>
	I1013 22:30:36.417229   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 22:30:36.417246   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 22:30:36.417259   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </controller>
	I1013 22:30:36.417269   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 22:30:36.417281   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 22:30:36.417293   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 22:30:36.417304   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </controller>
	I1013 22:30:36.417313   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <interface type='network'>
	I1013 22:30:36.417323   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <mac address='52:54:00:bd:a5:c9'/>
	I1013 22:30:36.417330   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <source network='mk-test-preload-047519'/>
	I1013 22:30:36.417345   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <model type='virtio'/>
	I1013 22:30:36.417360   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 22:30:36.417388   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </interface>
	I1013 22:30:36.417410   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <interface type='network'>
	I1013 22:30:36.417419   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <mac address='52:54:00:05:4a:00'/>
	I1013 22:30:36.417426   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <source network='default'/>
	I1013 22:30:36.417440   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <model type='virtio'/>
	I1013 22:30:36.417449   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 22:30:36.417458   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </interface>
	I1013 22:30:36.417464   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <serial type='pty'>
	I1013 22:30:36.417475   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <target type='isa-serial' port='0'>
	I1013 22:30:36.417482   55392 main.go:141] libmachine: (test-preload-047519) DBG |         <model name='isa-serial'/>
	I1013 22:30:36.417503   55392 main.go:141] libmachine: (test-preload-047519) DBG |       </target>
	I1013 22:30:36.417521   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </serial>
	I1013 22:30:36.417533   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <console type='pty'>
	I1013 22:30:36.417552   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <target type='serial' port='0'/>
	I1013 22:30:36.417563   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </console>
	I1013 22:30:36.417572   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <input type='mouse' bus='ps2'/>
	I1013 22:30:36.417585   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 22:30:36.417600   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <audio id='1' type='none'/>
	I1013 22:30:36.417612   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <memballoon model='virtio'>
	I1013 22:30:36.417625   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 22:30:36.417634   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </memballoon>
	I1013 22:30:36.417639   55392 main.go:141] libmachine: (test-preload-047519) DBG |     <rng model='virtio'>
	I1013 22:30:36.417649   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <backend model='random'>/dev/random</backend>
	I1013 22:30:36.417663   55392 main.go:141] libmachine: (test-preload-047519) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 22:30:36.417674   55392 main.go:141] libmachine: (test-preload-047519) DBG |     </rng>
	I1013 22:30:36.417686   55392 main.go:141] libmachine: (test-preload-047519) DBG |   </devices>
	I1013 22:30:36.417695   55392 main.go:141] libmachine: (test-preload-047519) DBG | </domain>
	I1013 22:30:36.417705   55392 main.go:141] libmachine: (test-preload-047519) DBG | 
	I1013 22:30:37.670084   55392 main.go:141] libmachine: (test-preload-047519) waiting for domain to start...
	I1013 22:30:37.671741   55392 main.go:141] libmachine: (test-preload-047519) domain is now running
	I1013 22:30:37.671764   55392 main.go:141] libmachine: (test-preload-047519) waiting for IP...
	I1013 22:30:37.672606   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:37.673106   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has current primary IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:37.673124   55392 main.go:141] libmachine: (test-preload-047519) found domain IP: 192.168.39.205
	I1013 22:30:37.673138   55392 main.go:141] libmachine: (test-preload-047519) reserving static IP address...
	I1013 22:30:37.673633   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "test-preload-047519", mac: "52:54:00:bd:a5:c9", ip: "192.168.39.205"} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:29:06 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:37.673660   55392 main.go:141] libmachine: (test-preload-047519) DBG | skip adding static IP to network mk-test-preload-047519 - found existing host DHCP lease matching {name: "test-preload-047519", mac: "52:54:00:bd:a5:c9", ip: "192.168.39.205"}
	I1013 22:30:37.673676   55392 main.go:141] libmachine: (test-preload-047519) reserved static IP address 192.168.39.205 for domain test-preload-047519
	I1013 22:30:37.673694   55392 main.go:141] libmachine: (test-preload-047519) waiting for SSH...
	I1013 22:30:37.673710   55392 main.go:141] libmachine: (test-preload-047519) DBG | Getting to WaitForSSH function...
	I1013 22:30:37.675941   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:37.676305   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:29:06 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:37.676334   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:37.676463   55392 main.go:141] libmachine: (test-preload-047519) DBG | Using SSH client type: external
	I1013 22:30:37.676497   55392 main.go:141] libmachine: (test-preload-047519) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa (-rw-------)
	I1013 22:30:37.676532   55392 main.go:141] libmachine: (test-preload-047519) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 22:30:37.676545   55392 main.go:141] libmachine: (test-preload-047519) DBG | About to run SSH command:
	I1013 22:30:37.676560   55392 main.go:141] libmachine: (test-preload-047519) DBG | exit 0
	I1013 22:30:48.938791   55392 main.go:141] libmachine: (test-preload-047519) DBG | SSH cmd err, output: exit status 255: 
	I1013 22:30:48.938823   55392 main.go:141] libmachine: (test-preload-047519) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1013 22:30:48.938831   55392 main.go:141] libmachine: (test-preload-047519) DBG | command : exit 0
	I1013 22:30:48.938836   55392 main.go:141] libmachine: (test-preload-047519) DBG | err     : exit status 255
	I1013 22:30:48.938847   55392 main.go:141] libmachine: (test-preload-047519) DBG | output  : 
	I1013 22:30:51.939349   55392 main.go:141] libmachine: (test-preload-047519) DBG | Getting to WaitForSSH function...
	I1013 22:30:51.942138   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:51.942525   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:51.942548   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:51.942766   55392 main.go:141] libmachine: (test-preload-047519) DBG | Using SSH client type: external
	I1013 22:30:51.942789   55392 main.go:141] libmachine: (test-preload-047519) DBG | Using SSH private key: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa (-rw-------)
	I1013 22:30:51.942814   55392 main.go:141] libmachine: (test-preload-047519) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1013 22:30:51.942828   55392 main.go:141] libmachine: (test-preload-047519) DBG | About to run SSH command:
	I1013 22:30:51.942866   55392 main.go:141] libmachine: (test-preload-047519) DBG | exit 0
	I1013 22:30:52.073225   55392 main.go:141] libmachine: (test-preload-047519) DBG | SSH cmd err, output: <nil>: 
	I1013 22:30:52.073563   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetConfigRaw
	I1013 22:30:52.074250   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetIP
	I1013 22:30:52.076778   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.077187   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.077240   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.077475   55392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/config.json ...
	I1013 22:30:52.077705   55392 machine.go:93] provisionDockerMachine start ...
	I1013 22:30:52.077724   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:52.077931   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:52.080277   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.080642   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.080657   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.080804   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:52.080979   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.081139   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.081309   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:52.081464   55392 main.go:141] libmachine: Using SSH client type: native
	I1013 22:30:52.081744   55392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1013 22:30:52.081760   55392 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:30:52.187048   55392 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1013 22:30:52.187077   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetMachineName
	I1013 22:30:52.187355   55392 buildroot.go:166] provisioning hostname "test-preload-047519"
	I1013 22:30:52.187388   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetMachineName
	I1013 22:30:52.187567   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:52.190702   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.191092   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.191116   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.191377   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:52.191569   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.191715   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.191797   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:52.191906   55392 main.go:141] libmachine: Using SSH client type: native
	I1013 22:30:52.192186   55392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1013 22:30:52.192205   55392 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-047519 && echo "test-preload-047519" | sudo tee /etc/hostname
	I1013 22:30:52.315428   55392 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-047519
	
	I1013 22:30:52.315460   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:52.318499   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.318903   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.318927   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.319222   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:52.319440   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.319590   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.319731   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:52.319909   55392 main.go:141] libmachine: Using SSH client type: native
	I1013 22:30:52.320101   55392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1013 22:30:52.320118   55392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-047519' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-047519/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-047519' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:30:52.436339   55392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:30:52.436370   55392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 22:30:52.436406   55392 buildroot.go:174] setting up certificates
	I1013 22:30:52.436414   55392 provision.go:84] configureAuth start
	I1013 22:30:52.436428   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetMachineName
	I1013 22:30:52.436747   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetIP
	I1013 22:30:52.439832   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.440357   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.440380   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.440589   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:52.443266   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.443617   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.443648   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.443813   55392 provision.go:143] copyHostCerts
	I1013 22:30:52.443863   55392 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 22:30:52.443876   55392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 22:30:52.443980   55392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 22:30:52.444105   55392 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 22:30:52.444116   55392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 22:30:52.444153   55392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 22:30:52.444256   55392 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 22:30:52.444267   55392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 22:30:52.444304   55392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 22:30:52.444377   55392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.test-preload-047519 san=[127.0.0.1 192.168.39.205 localhost minikube test-preload-047519]
	I1013 22:30:52.674291   55392 provision.go:177] copyRemoteCerts
	I1013 22:30:52.674356   55392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:30:52.674381   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:52.677643   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.678058   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.678092   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.678304   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:52.678503   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.678660   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:52.678761   55392 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa Username:docker}
	I1013 22:30:52.762228   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:30:52.795251   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1013 22:30:52.827316   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:30:52.859646   55392 provision.go:87] duration metric: took 423.218821ms to configureAuth
	I1013 22:30:52.859681   55392 buildroot.go:189] setting minikube options for container-runtime
	I1013 22:30:52.859842   55392 config.go:182] Loaded profile config "test-preload-047519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1013 22:30:52.859921   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:52.862917   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.863371   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:52.863403   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:52.863594   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:52.863815   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.863992   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:52.864138   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:52.864320   55392 main.go:141] libmachine: Using SSH client type: native
	I1013 22:30:52.864518   55392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1013 22:30:52.864535   55392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:30:53.110092   55392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:30:53.110121   55392 machine.go:96] duration metric: took 1.03240184s to provisionDockerMachine
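	The provisioning step above boils down to opening an SSH session to the VM with the machine's private key and running one shell command as root; a minimal sketch of that pattern with golang.org/x/crypto/ssh (illustrative only, not minikube's ssh_runner; the key path, address, user and command are taken from the log above):
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		keyPath := "/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "192.168.39.205:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
	
		// Same command the log shows: write the CRI-O sysconfig drop-in and restart the service.
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
		out, err := sess.CombinedOutput(cmd)
		fmt.Printf("%s err=%v\n", out, err)
	}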
	I1013 22:30:53.110134   55392 start.go:293] postStartSetup for "test-preload-047519" (driver="kvm2")
	I1013 22:30:53.110144   55392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:30:53.110176   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:53.110494   55392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:30:53.110525   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:53.113298   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.113629   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:53.113657   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.113851   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:53.114042   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:53.114237   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:53.114390   55392 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa Username:docker}
	I1013 22:30:53.198944   55392 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:30:53.204488   55392 info.go:137] Remote host: Buildroot 2025.02
	I1013 22:30:53.204512   55392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 22:30:53.204580   55392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 22:30:53.204651   55392 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 22:30:53.204734   55392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:30:53.217315   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:30:53.247864   55392 start.go:296] duration metric: took 137.715093ms for postStartSetup
	I1013 22:30:53.247908   55392 fix.go:56] duration metric: took 16.854772669s for fixHost
	I1013 22:30:53.247933   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:53.250746   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.251144   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:53.251188   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.251371   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:53.251574   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:53.251727   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:53.251896   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:53.252062   55392 main.go:141] libmachine: Using SSH client type: native
	I1013 22:30:53.252365   55392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1013 22:30:53.252381   55392 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 22:30:53.358126   55392 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760394653.321398578
	
	I1013 22:30:53.358150   55392 fix.go:216] guest clock: 1760394653.321398578
	I1013 22:30:53.358178   55392 fix.go:229] Guest: 2025-10-13 22:30:53.321398578 +0000 UTC Remote: 2025-10-13 22:30:53.247913048 +0000 UTC m=+20.188830145 (delta=73.48553ms)
	I1013 22:30:53.358224   55392 fix.go:200] guest clock delta is within tolerance: 73.48553ms
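	The guest clock check above runs `date +%s.%N` on the VM and compares the result against the host's wall clock; roughly the following (the 2s tolerance is an assumption for the sketch, the log only reports that the ~73ms delta was acceptable):
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func main() {
		// Output of `date +%s.%N` on the guest, as captured in the log above.
		guestOut := "1760394653.321398578"
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(int64(secs), int64((secs-float64(int64(secs)))*1e9))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		if delta > 2*time.Second { // assumed tolerance, for illustration only
			fmt.Println("clock skew too large, would trigger a time sync")
		}
	}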
	I1013 22:30:53.358230   55392 start.go:83] releasing machines lock for "test-preload-047519", held for 16.965111348s
	I1013 22:30:53.358253   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:53.358560   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetIP
	I1013 22:30:53.361743   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.362111   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:53.362169   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.362325   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:53.362781   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:53.362964   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:30:53.363053   55392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:30:53.363110   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:53.363261   55392 ssh_runner.go:195] Run: cat /version.json
	I1013 22:30:53.363286   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:30:53.366303   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.366329   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.366777   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:53.366803   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:53.366825   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.366848   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:53.367063   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:53.367151   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:30:53.367276   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:53.367340   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:30:53.367427   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:53.367436   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:30:53.367577   55392 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa Username:docker}
	I1013 22:30:53.367623   55392 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa Username:docker}
	I1013 22:30:53.446747   55392 ssh_runner.go:195] Run: systemctl --version
	I1013 22:30:53.476270   55392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:30:53.624785   55392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:30:53.632439   55392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:30:53.632499   55392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:30:53.652961   55392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:30:53.652987   55392 start.go:495] detecting cgroup driver to use...
	I1013 22:30:53.653046   55392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:30:53.672784   55392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:30:53.690078   55392 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:30:53.690154   55392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:30:53.708039   55392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:30:53.725100   55392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:30:53.872371   55392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:30:54.094695   55392 docker.go:234] disabling docker service ...
	I1013 22:30:54.094763   55392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:30:54.112524   55392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:30:54.128094   55392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:30:54.281967   55392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:30:54.430127   55392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:30:54.449280   55392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:30:54.473492   55392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1013 22:30:54.473561   55392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:30:54.486117   55392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:30:54.486198   55392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:30:54.499204   55392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:30:54.512125   55392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:30:54.524812   55392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:30:54.538494   55392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:30:54.551379   55392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:30:54.572894   55392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
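	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, sysctls). A hedged Go equivalent of the first two edits, shown only to make the intent of the sed expressions explicit:
	package main
	
	import (
		"os"
		"regexp"
	)
	
	func main() {
		// Equivalent of the sed edits above: point CRI-O at the cgroupfs driver
		// and the registry.k8s.io/pause:3.10 pause image in 02-crio.conf.
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}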
	I1013 22:30:54.586344   55392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:30:54.596867   55392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 22:30:54.596935   55392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 22:30:54.617392   55392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:30:54.629708   55392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:30:54.773330   55392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:30:54.904046   55392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:30:54.904129   55392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:30:54.910052   55392 start.go:563] Will wait 60s for crictl version
	I1013 22:30:54.910111   55392 ssh_runner.go:195] Run: which crictl
	I1013 22:30:54.914458   55392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 22:30:54.956487   55392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 22:30:54.956555   55392 ssh_runner.go:195] Run: crio --version
	I1013 22:30:54.987609   55392 ssh_runner.go:195] Run: crio --version
	I1013 22:30:55.020254   55392 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1013 22:30:55.021518   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetIP
	I1013 22:30:55.025005   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:55.025424   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:30:55.025457   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:30:55.025703   55392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1013 22:30:55.030504   55392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:30:55.046235   55392 kubeadm.go:883] updating cluster {Name:test-preload-047519 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-047519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:30:55.046355   55392 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1013 22:30:55.046401   55392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:30:55.086862   55392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1013 22:30:55.086951   55392 ssh_runner.go:195] Run: which lz4
	I1013 22:30:55.091753   55392 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 22:30:55.097125   55392 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 22:30:55.097172   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1013 22:30:56.670445   55392 crio.go:462] duration metric: took 1.57872138s to copy over tarball
	I1013 22:30:56.670512   55392 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 22:30:58.367989   55392 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.697448719s)
	I1013 22:30:58.368027   55392 crio.go:469] duration metric: took 1.697555355s to extract the tarball
	I1013 22:30:58.368036   55392 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 22:30:58.410042   55392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:30:58.453478   55392 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:30:58.453503   55392 cache_images.go:85] Images are preloaded, skipping loading
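	The preload path seen above is: check the guest's image store, scp the tarball, extract it over /var with lz4, delete it, then re-check. A minimal sketch of the extraction step with os/exec, assuming the tarball has already been copied to /preloaded.tar.lz4 on the guest:
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		// Same flags as in the log: preserve xattrs (file capabilities),
		// decompress with lz4, unpack into /var where CRI-O keeps its image store.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}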
	I1013 22:30:58.453512   55392 kubeadm.go:934] updating node { 192.168.39.205 8443 v1.32.0 crio true true} ...
	I1013 22:30:58.453618   55392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-047519 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-047519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:30:58.453697   55392 ssh_runner.go:195] Run: crio config
	I1013 22:30:58.500586   55392 cni.go:84] Creating CNI manager for ""
	I1013 22:30:58.500609   55392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:30:58.500624   55392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:30:58.500644   55392 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-047519 NodeName:test-preload-047519 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:30:58.500760   55392 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-047519"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:30:58.500828   55392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1013 22:30:58.513566   55392 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:30:58.513626   55392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:30:58.525454   55392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1013 22:30:58.545880   55392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:30:58.566506   55392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
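	The kubeadm.yaml printed above is rendered from the kubeadm options and written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch of that kind of rendering with text/template; the struct and field names here are illustrative, not minikube's bootstrapper types:
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Illustrative subset of the options that feed the ClusterConfiguration above.
	type clusterOpts struct {
		APIServerPort     int
		KubernetesVersion string
		PodSubnet         string
		ServiceSubnet     string
	}
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		opts := clusterOpts{
			APIServerPort:     8443,
			KubernetesVersion: "v1.32.0",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}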
	I1013 22:30:58.587201   55392 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1013 22:30:58.591609   55392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:30:58.606710   55392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:30:58.747104   55392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:30:58.767828   55392 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519 for IP: 192.168.39.205
	I1013 22:30:58.767855   55392 certs.go:195] generating shared ca certs ...
	I1013 22:30:58.767877   55392 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:30:58.768085   55392 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 22:30:58.768188   55392 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 22:30:58.768209   55392 certs.go:257] generating profile certs ...
	I1013 22:30:58.768336   55392 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.key
	I1013 22:30:58.768404   55392 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/apiserver.key.ba393037
	I1013 22:30:58.768456   55392 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/proxy-client.key
	I1013 22:30:58.768621   55392 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 22:30:58.768673   55392 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 22:30:58.768687   55392 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:30:58.768726   55392 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:30:58.768760   55392 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:30:58.768797   55392 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 22:30:58.768860   55392 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:30:58.769671   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:30:58.812421   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:30:58.846039   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:30:58.878482   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:30:58.908464   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 22:30:58.937346   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:30:58.966959   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:30:58.997385   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:30:59.027701   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 22:30:59.060191   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 22:30:59.093921   55392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:30:59.125261   55392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:30:59.146357   55392 ssh_runner.go:195] Run: openssl version
	I1013 22:30:59.153058   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 22:30:59.166358   55392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 22:30:59.171689   55392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 22:30:59.171738   55392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 22:30:59.179054   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 22:30:59.192302   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 22:30:59.205371   55392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 22:30:59.210545   55392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 22:30:59.210600   55392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 22:30:59.217723   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:30:59.231421   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:30:59.245008   55392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:30:59.250439   55392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:30:59.250480   55392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:30:59.257780   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:30:59.271821   55392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:30:59.277303   55392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:30:59.284764   55392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:30:59.292208   55392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:30:59.299705   55392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:30:59.307195   55392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:30:59.314603   55392 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
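	The `openssl x509 -noout -checkend 86400` calls above ask whether each certificate expires within the next 24 hours (86400 s). The same check expressed with Go's crypto/x509, reusing one of the paths from the log purely as an example:
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of -checkend 86400: does the cert expire within 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h, should be regenerated")
		}
	}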
	I1013 22:30:59.321720   55392 kubeadm.go:400] StartCluster: {Name:test-preload-047519 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-047519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:30:59.321789   55392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:30:59.321847   55392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:30:59.363221   55392 cri.go:89] found id: ""
	I1013 22:30:59.363301   55392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:30:59.376501   55392 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:30:59.376522   55392 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:30:59.376570   55392 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:30:59.389297   55392 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:30:59.389714   55392 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-047519" does not appear in /home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:30:59.389862   55392 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-15625/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-047519" cluster setting kubeconfig missing "test-preload-047519" context setting]
	I1013 22:30:59.390173   55392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/kubeconfig: {Name:mkba5ceb9d6438ffa1375fb51eda64fa770df7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:30:59.390658   55392 kapi.go:59] client config for test-preload-047519: &rest.Config{Host:"https://192.168.39.205:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.key", CAFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:30:59.391092   55392 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1013 22:30:59.391107   55392 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 22:30:59.391111   55392 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 22:30:59.391116   55392 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 22:30:59.391120   55392 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
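	The rest.Config dumped above points client-go at the apiserver using the profile's client certificate and the cluster CA. A minimal hedged equivalent that builds a clientset the same way and lists kube-system pods (as the system_pods wait further down does); error handling is trimmed and the paths are copied from the log:
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		profile := "/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519"
		cfg := &rest.Config{
			Host: "https://192.168.39.205:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: profile + "/client.crt",
				KeyFile:  profile + "/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}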
	I1013 22:30:59.391421   55392 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:30:59.403146   55392 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.205
	I1013 22:30:59.403213   55392 kubeadm.go:1160] stopping kube-system containers ...
	I1013 22:30:59.403227   55392 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1013 22:30:59.403277   55392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:30:59.445214   55392 cri.go:89] found id: ""
	I1013 22:30:59.445284   55392 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 22:30:59.471257   55392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:30:59.483685   55392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:30:59.483709   55392 kubeadm.go:157] found existing configuration files:
	
	I1013 22:30:59.483752   55392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:30:59.494626   55392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:30:59.494694   55392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:30:59.506176   55392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:30:59.517281   55392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:30:59.517330   55392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:30:59.529361   55392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:30:59.540431   55392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:30:59.540488   55392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:30:59.552258   55392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:30:59.563127   55392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:30:59.563200   55392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:30:59.574885   55392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:30:59.586553   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:30:59.643930   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:31:00.792003   55392 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.148041114s)
	I1013 22:31:00.792085   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:31:01.042744   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:31:01.120212   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:31:01.224400   55392 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:31:01.224488   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:01.724540   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:02.224561   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:02.724895   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:03.224731   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:03.724911   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:03.747722   55392 api_server.go:72] duration metric: took 2.523331719s to wait for apiserver process to appear ...
	I1013 22:31:03.747747   55392 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:31:03.747769   55392 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1013 22:31:05.848699   55392 api_server.go:279] https://192.168.39.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 22:31:05.848725   55392 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 22:31:05.848743   55392 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1013 22:31:05.964722   55392 api_server.go:279] https://192.168.39.205:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 22:31:05.964749   55392 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 22:31:06.248207   55392 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1013 22:31:06.252774   55392 api_server.go:279] https://192.168.39.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:31:06.252799   55392 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:31:06.748837   55392 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1013 22:31:06.754167   55392 api_server.go:279] https://192.168.39.205:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:31:06.754196   55392 api_server.go:103] status: https://192.168.39.205:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:31:07.247862   55392 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1013 22:31:07.253620   55392 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I1013 22:31:07.261584   55392 api_server.go:141] control plane version: v1.32.0
	I1013 22:31:07.261610   55392 api_server.go:131] duration metric: took 3.513855457s to wait for apiserver health ...
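
The api_server.go lines above record minikube repeatedly probing the apiserver's /healthz endpoint, logging each 500 response body (with its failing poststarthooks) until the endpoint finally returns 200. Below is a minimal Go sketch of that kind of polling loop, written for illustration only: it is not minikube's api_server.go, the endpoint URL is simply the one that appears in this log, and the InsecureSkipVerify shortcut stands in for the real client-certificate authentication.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip TLS verification instead of loading the cluster CA
		// and client certificate/key the way the real kubeconfig-based client does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok": the control plane answered 200
			}
			// A 500 body lists the failing poststarthooks, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Address taken from this log; adjust for another cluster.
	if err := waitForHealthz("https://192.168.39.205:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
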
	I1013 22:31:07.261621   55392 cni.go:84] Creating CNI manager for ""
	I1013 22:31:07.261629   55392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:31:07.263415   55392 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1013 22:31:07.264603   55392 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1013 22:31:07.281630   55392 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1013 22:31:07.316005   55392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:31:07.321172   55392 system_pods.go:59] 7 kube-system pods found
	I1013 22:31:07.321220   55392 system_pods.go:61] "coredns-668d6bf9bc-l2kbb" [3985e1b5-e0db-47f2-9570-72f559d341f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:31:07.321232   55392 system_pods.go:61] "etcd-test-preload-047519" [9e04927c-144b-4ddc-8cff-2aacb135bed0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:31:07.321243   55392 system_pods.go:61] "kube-apiserver-test-preload-047519" [da41cac5-e738-45a8-999b-491df0f6ddf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:31:07.321252   55392 system_pods.go:61] "kube-controller-manager-test-preload-047519" [6a24fc74-7f9e-4e2f-ac5e-d7c7be738b3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:31:07.321259   55392 system_pods.go:61] "kube-proxy-c75c9" [83b89e0a-db26-44f3-8208-4c14e5f72b6d] Running
	I1013 22:31:07.321269   55392 system_pods.go:61] "kube-scheduler-test-preload-047519" [3a9d4bf4-5f0d-484f-a5cd-266782a5fd91] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:31:07.321275   55392 system_pods.go:61] "storage-provisioner" [80d91d67-e34a-40de-a284-edd177e765e1] Running
	I1013 22:31:07.321286   55392 system_pods.go:74] duration metric: took 5.251888ms to wait for pod list to return data ...
	I1013 22:31:07.321296   55392 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:31:07.325476   55392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 22:31:07.325506   55392 node_conditions.go:123] node cpu capacity is 2
	I1013 22:31:07.325522   55392 node_conditions.go:105] duration metric: took 4.218998ms to run NodePressure ...
	I1013 22:31:07.325578   55392 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 22:31:07.589175   55392 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 22:31:07.593034   55392 kubeadm.go:743] kubelet initialised
	I1013 22:31:07.593059   55392 kubeadm.go:744] duration metric: took 3.854468ms waiting for restarted kubelet to initialise ...
	I1013 22:31:07.593075   55392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:31:07.609620   55392 ops.go:34] apiserver oom_adj: -16
	I1013 22:31:07.609649   55392 kubeadm.go:601] duration metric: took 8.233120086s to restartPrimaryControlPlane
	I1013 22:31:07.609660   55392 kubeadm.go:402] duration metric: took 8.287944311s to StartCluster
	I1013 22:31:07.609675   55392 settings.go:142] acquiring lock: {Name:mk429dcebf497c5553c28c0bde1089c59d439da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:31:07.609766   55392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:31:07.610361   55392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/kubeconfig: {Name:mkba5ceb9d6438ffa1375fb51eda64fa770df7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:31:07.610606   55392 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:31:07.610662   55392 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:31:07.610751   55392 addons.go:69] Setting storage-provisioner=true in profile "test-preload-047519"
	I1013 22:31:07.610770   55392 addons.go:238] Setting addon storage-provisioner=true in "test-preload-047519"
	W1013 22:31:07.610781   55392 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:31:07.610781   55392 addons.go:69] Setting default-storageclass=true in profile "test-preload-047519"
	I1013 22:31:07.610811   55392 host.go:66] Checking if "test-preload-047519" exists ...
	I1013 22:31:07.610813   55392 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-047519"
	I1013 22:31:07.610832   55392 config.go:182] Loaded profile config "test-preload-047519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1013 22:31:07.611205   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:31:07.611247   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:31:07.611300   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:31:07.611347   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:31:07.615701   55392 out.go:179] * Verifying Kubernetes components...
	I1013 22:31:07.617235   55392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:31:07.624239   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1013 22:31:07.624277   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I1013 22:31:07.624671   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:31:07.624681   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:31:07.625381   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:31:07.625399   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:31:07.625404   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:31:07.625424   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:31:07.625760   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:31:07.625794   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:31:07.625974   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetState
	I1013 22:31:07.626472   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:31:07.626507   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:31:07.628669   55392 kapi.go:59] client config for test-preload-047519: &rest.Config{Host:"https://192.168.39.205:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.key", CAFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:31:07.629037   55392 addons.go:238] Setting addon default-storageclass=true in "test-preload-047519"
	W1013 22:31:07.629057   55392 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:31:07.629083   55392 host.go:66] Checking if "test-preload-047519" exists ...
	I1013 22:31:07.629480   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:31:07.629515   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:31:07.640501   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I1013 22:31:07.641035   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:31:07.641520   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:31:07.641545   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:31:07.641841   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:31:07.642019   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetState
	I1013 22:31:07.642873   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I1013 22:31:07.643288   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:31:07.643740   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:31:07.643764   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:31:07.644080   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:31:07.644211   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:31:07.644765   55392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:31:07.644800   55392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:31:07.646106   55392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:31:07.647553   55392 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:31:07.647569   55392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:31:07.647583   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:31:07.650695   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:31:07.651153   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:31:07.651191   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:31:07.651391   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:31:07.651566   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:31:07.651729   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:31:07.651891   55392 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa Username:docker}
	I1013 22:31:07.658278   55392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1013 22:31:07.658700   55392 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:31:07.659126   55392 main.go:141] libmachine: Using API Version  1
	I1013 22:31:07.659144   55392 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:31:07.659524   55392 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:31:07.659741   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetState
	I1013 22:31:07.661554   55392 main.go:141] libmachine: (test-preload-047519) Calling .DriverName
	I1013 22:31:07.661759   55392 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:31:07.661775   55392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:31:07.661793   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHHostname
	I1013 22:31:07.665140   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:31:07.665643   55392 main.go:141] libmachine: (test-preload-047519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:a5:c9", ip: ""} in network mk-test-preload-047519: {Iface:virbr1 ExpiryTime:2025-10-13 23:30:48 +0000 UTC Type:0 Mac:52:54:00:bd:a5:c9 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:test-preload-047519 Clientid:01:52:54:00:bd:a5:c9}
	I1013 22:31:07.665663   55392 main.go:141] libmachine: (test-preload-047519) DBG | domain test-preload-047519 has defined IP address 192.168.39.205 and MAC address 52:54:00:bd:a5:c9 in network mk-test-preload-047519
	I1013 22:31:07.665907   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHPort
	I1013 22:31:07.666073   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHKeyPath
	I1013 22:31:07.666264   55392 main.go:141] libmachine: (test-preload-047519) Calling .GetSSHUsername
	I1013 22:31:07.666400   55392 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/test-preload-047519/id_rsa Username:docker}
	I1013 22:31:07.817278   55392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:31:07.838250   55392 node_ready.go:35] waiting up to 6m0s for node "test-preload-047519" to be "Ready" ...
	I1013 22:31:07.972482   55392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:31:07.974312   55392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:31:08.747502   55392 main.go:141] libmachine: Making call to close driver server
	I1013 22:31:08.747527   55392 main.go:141] libmachine: Making call to close driver server
	I1013 22:31:08.747545   55392 main.go:141] libmachine: (test-preload-047519) Calling .Close
	I1013 22:31:08.747536   55392 main.go:141] libmachine: (test-preload-047519) Calling .Close
	I1013 22:31:08.747830   55392 main.go:141] libmachine: (test-preload-047519) DBG | Closing plugin on server side
	I1013 22:31:08.747832   55392 main.go:141] libmachine: Successfully made call to close driver server
	I1013 22:31:08.747851   55392 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 22:31:08.747860   55392 main.go:141] libmachine: Making call to close driver server
	I1013 22:31:08.747867   55392 main.go:141] libmachine: (test-preload-047519) Calling .Close
	I1013 22:31:08.747868   55392 main.go:141] libmachine: (test-preload-047519) DBG | Closing plugin on server side
	I1013 22:31:08.747891   55392 main.go:141] libmachine: Successfully made call to close driver server
	I1013 22:31:08.747899   55392 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 22:31:08.747907   55392 main.go:141] libmachine: Making call to close driver server
	I1013 22:31:08.747914   55392 main.go:141] libmachine: (test-preload-047519) Calling .Close
	I1013 22:31:08.748086   55392 main.go:141] libmachine: Successfully made call to close driver server
	I1013 22:31:08.748100   55392 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 22:31:08.748239   55392 main.go:141] libmachine: (test-preload-047519) DBG | Closing plugin on server side
	I1013 22:31:08.748262   55392 main.go:141] libmachine: Successfully made call to close driver server
	I1013 22:31:08.748269   55392 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 22:31:08.766337   55392 main.go:141] libmachine: Making call to close driver server
	I1013 22:31:08.766361   55392 main.go:141] libmachine: (test-preload-047519) Calling .Close
	I1013 22:31:08.766659   55392 main.go:141] libmachine: Successfully made call to close driver server
	I1013 22:31:08.766677   55392 main.go:141] libmachine: Making call to close connection to plugin binary
	I1013 22:31:08.766697   55392 main.go:141] libmachine: (test-preload-047519) DBG | Closing plugin on server side
	I1013 22:31:08.768511   55392 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:31:08.769809   55392 addons.go:514] duration metric: took 1.159146021s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1013 22:31:09.841325   55392 node_ready.go:57] node "test-preload-047519" has "Ready":"False" status (will retry)
	W1013 22:31:11.841667   55392 node_ready.go:57] node "test-preload-047519" has "Ready":"False" status (will retry)
	W1013 22:31:14.342459   55392 node_ready.go:57] node "test-preload-047519" has "Ready":"False" status (will retry)
	I1013 22:31:16.345491   55392 node_ready.go:49] node "test-preload-047519" is "Ready"
	I1013 22:31:16.345529   55392 node_ready.go:38] duration metric: took 8.50724749s for node "test-preload-047519" to be "Ready" ...
	I1013 22:31:16.345548   55392 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:31:16.345599   55392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:31:16.368648   55392 api_server.go:72] duration metric: took 8.758007383s to wait for apiserver process to appear ...
	I1013 22:31:16.368677   55392 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:31:16.368692   55392 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1013 22:31:16.374246   55392 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I1013 22:31:16.375522   55392 api_server.go:141] control plane version: v1.32.0
	I1013 22:31:16.375544   55392 api_server.go:131] duration metric: took 6.86144ms to wait for apiserver health ...
	I1013 22:31:16.375553   55392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:31:16.379318   55392 system_pods.go:59] 7 kube-system pods found
	I1013 22:31:16.379342   55392 system_pods.go:61] "coredns-668d6bf9bc-l2kbb" [3985e1b5-e0db-47f2-9570-72f559d341f4] Running
	I1013 22:31:16.379347   55392 system_pods.go:61] "etcd-test-preload-047519" [9e04927c-144b-4ddc-8cff-2aacb135bed0] Running
	I1013 22:31:16.379358   55392 system_pods.go:61] "kube-apiserver-test-preload-047519" [da41cac5-e738-45a8-999b-491df0f6ddf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:31:16.379366   55392 system_pods.go:61] "kube-controller-manager-test-preload-047519" [6a24fc74-7f9e-4e2f-ac5e-d7c7be738b3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:31:16.379372   55392 system_pods.go:61] "kube-proxy-c75c9" [83b89e0a-db26-44f3-8208-4c14e5f72b6d] Running
	I1013 22:31:16.379376   55392 system_pods.go:61] "kube-scheduler-test-preload-047519" [3a9d4bf4-5f0d-484f-a5cd-266782a5fd91] Running
	I1013 22:31:16.379383   55392 system_pods.go:61] "storage-provisioner" [80d91d67-e34a-40de-a284-edd177e765e1] Running
	I1013 22:31:16.379389   55392 system_pods.go:74] duration metric: took 3.831461ms to wait for pod list to return data ...
	I1013 22:31:16.379400   55392 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:31:16.382094   55392 default_sa.go:45] found service account: "default"
	I1013 22:31:16.382113   55392 default_sa.go:55] duration metric: took 2.708092ms for default service account to be created ...
	I1013 22:31:16.382121   55392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:31:16.385182   55392 system_pods.go:86] 7 kube-system pods found
	I1013 22:31:16.385209   55392 system_pods.go:89] "coredns-668d6bf9bc-l2kbb" [3985e1b5-e0db-47f2-9570-72f559d341f4] Running
	I1013 22:31:16.385215   55392 system_pods.go:89] "etcd-test-preload-047519" [9e04927c-144b-4ddc-8cff-2aacb135bed0] Running
	I1013 22:31:16.385221   55392 system_pods.go:89] "kube-apiserver-test-preload-047519" [da41cac5-e738-45a8-999b-491df0f6ddf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:31:16.385226   55392 system_pods.go:89] "kube-controller-manager-test-preload-047519" [6a24fc74-7f9e-4e2f-ac5e-d7c7be738b3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:31:16.385232   55392 system_pods.go:89] "kube-proxy-c75c9" [83b89e0a-db26-44f3-8208-4c14e5f72b6d] Running
	I1013 22:31:16.385236   55392 system_pods.go:89] "kube-scheduler-test-preload-047519" [3a9d4bf4-5f0d-484f-a5cd-266782a5fd91] Running
	I1013 22:31:16.385239   55392 system_pods.go:89] "storage-provisioner" [80d91d67-e34a-40de-a284-edd177e765e1] Running
	I1013 22:31:16.385246   55392 system_pods.go:126] duration metric: took 3.120197ms to wait for k8s-apps to be running ...
	I1013 22:31:16.385253   55392 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:31:16.385294   55392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:31:16.402661   55392 system_svc.go:56] duration metric: took 17.395589ms WaitForService to wait for kubelet
	I1013 22:31:16.402695   55392 kubeadm.go:586] duration metric: took 8.79206069s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:31:16.402720   55392 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:31:16.405318   55392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1013 22:31:16.405346   55392 node_conditions.go:123] node cpu capacity is 2
	I1013 22:31:16.405362   55392 node_conditions.go:105] duration metric: took 2.636242ms to run NodePressure ...
	I1013 22:31:16.405378   55392 start.go:241] waiting for startup goroutines ...
	I1013 22:31:16.405389   55392 start.go:246] waiting for cluster config update ...
	I1013 22:31:16.405407   55392 start.go:255] writing updated cluster config ...
	I1013 22:31:16.405754   55392 ssh_runner.go:195] Run: rm -f paused
	I1013 22:31:16.412969   55392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:31:16.413493   55392 kapi.go:59] client config for test-preload-047519: &rest.Config{Host:"https://192.168.39.205:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/profiles/test-preload-047519/client.key", CAFile:"/home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 22:31:16.416255   55392 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-l2kbb" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:16.423286   55392 pod_ready.go:94] pod "coredns-668d6bf9bc-l2kbb" is "Ready"
	I1013 22:31:16.423315   55392 pod_ready.go:86] duration metric: took 7.032619ms for pod "coredns-668d6bf9bc-l2kbb" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:16.479743   55392 pod_ready.go:83] waiting for pod "etcd-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:16.485322   55392 pod_ready.go:94] pod "etcd-test-preload-047519" is "Ready"
	I1013 22:31:16.485346   55392 pod_ready.go:86] duration metric: took 5.579892ms for pod "etcd-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:16.487888   55392 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:31:18.494310   55392 pod_ready.go:104] pod "kube-apiserver-test-preload-047519" is not "Ready", error: <nil>
	I1013 22:31:20.495751   55392 pod_ready.go:94] pod "kube-apiserver-test-preload-047519" is "Ready"
	I1013 22:31:20.495775   55392 pod_ready.go:86] duration metric: took 4.007869022s for pod "kube-apiserver-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:20.498090   55392 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:20.501997   55392 pod_ready.go:94] pod "kube-controller-manager-test-preload-047519" is "Ready"
	I1013 22:31:20.502016   55392 pod_ready.go:86] duration metric: took 3.903245ms for pod "kube-controller-manager-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:20.504522   55392 pod_ready.go:83] waiting for pod "kube-proxy-c75c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:20.617228   55392 pod_ready.go:94] pod "kube-proxy-c75c9" is "Ready"
	I1013 22:31:20.617255   55392 pod_ready.go:86] duration metric: took 112.711433ms for pod "kube-proxy-c75c9" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:20.817724   55392 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:21.218035   55392 pod_ready.go:94] pod "kube-scheduler-test-preload-047519" is "Ready"
	I1013 22:31:21.218073   55392 pod_ready.go:86] duration metric: took 400.324491ms for pod "kube-scheduler-test-preload-047519" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:31:21.218088   55392 pod_ready.go:40] duration metric: took 4.805081703s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
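
The pod_ready.go lines above record the final wait: for each control-plane label (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) minikube waits until a matching kube-system pod reports the Ready condition. The client-go sketch below shows one way such a wait can be expressed; it is an illustration under stated assumptions, not minikube's pod_ready.go, and the kubeconfig path is simply the one written earlier in this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabeledPods polls every two seconds until, for each selector, at
// least one kube-system pod matches and all matching pods are Ready.
func waitForLabeledPods(ctx context.Context, cs *kubernetes.Clientset, selectors []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Printf("pods matching %q are Ready\n", sel)
				break
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods matching %q", sel)
			}
			time.Sleep(2 * time.Second)
		}
	}
	return nil
}

func main() {
	// Kubeconfig path taken from this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21724-15625/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	if err := waitForLabeledPods(context.Background(), cs, selectors, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
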
	I1013 22:31:21.257720   55392 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1013 22:31:21.259239   55392 out.go:203] 
	W1013 22:31:21.260493   55392 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1013 22:31:21.261814   55392 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1013 22:31:21.263351   55392 out.go:179] * Done! kubectl is now configured to use "test-preload-047519" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.203811476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47b0cc66-0245-402e-a856-48265c0a0390 name=/runtime.v1.RuntimeService/Version
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.205111459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68b4bd59-72e0-4ba8-9982-bb46ebd98c0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.205616659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394682205592671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68b4bd59-72e0-4ba8-9982-bb46ebd98c0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.206169333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f94a225-9bf7-4f75-9b3f-614fb70779b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.206219382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f94a225-9bf7-4f75-9b3f-614fb70779b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.206411637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51e5f4ae535ef41dca154748d03fdbcf8c0dfbf301b4dd3688fcadeb6ffd3995,PodSandboxId:a7573e3bdb32ba30b0dd88f90b248eae6f6871dfdaf056a89bf11900470256d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760394674205347149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l2kbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3985e1b5-e0db-47f2-9570-72f559d341f4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0907c20a93e52a544ca5a946b6546411fb4e15533c0d4b96282fa40fe8a3e0c6,PodSandboxId:9213d40f92372c8b8c2845c3a90126322e921fb631ce32ec3af9ef58cf302b76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760394666554811843,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c75c9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 83b89e0a-db26-44f3-8208-4c14e5f72b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84bcd51c0a911f1fac50ae9be9a524f554a7956abea7b54990463665b506a32b,PodSandboxId:420d27ddd07c46182717105f0a5f686752c3478c300532a376e84bd34b260b94,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760394666548977309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
d91d67-e34a-40de-a284-edd177e765e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c7101af4bbce8d13246d79d0b2dc97cdea40a264dbeb84c416057db39a9dc8,PodSandboxId:c87d66c41347dca9a537d69bdf252369d703c7e6a9d08eb6289a79f200de42fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760394663126670679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3c2ba7970ac322a35c7b069ed734f4,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b8b97600cbdfb637cb3df1839cc3a9ffd21b08c389b4b1bbec81cc9ed2a6e7,PodSandboxId:3f36dca48bbb9df55118ef187afdea221b393c09ae7ed7f1781327c59f64e65d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760394663130391965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addc7e91e38bd583bad0956
f87a8bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224e19f36d1a263038677a361b1e13f610cd9b862c1f642912e4afc6891a8ba7,PodSandboxId:267941ddd5a9c5e754b36c1e4cec24d0cbd9b3c3b055a001abca9f3befde40ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760394663093856228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029c997f854ab3db462ee34868e7532,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a52847b403feddf107fd1ef93fd5ff80f3aa4b2b3b8c622771997613870fe5,PodSandboxId:3ba7578683bca1e3e2eefc6f3c80cfa8dca819a82bad3bddac73aba50cc8239a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760394663086543106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132653e3b167f55dee4989b20a4d32f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f94a225-9bf7-4f75-9b3f-614fb70779b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.247646284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0979002a-82ef-433f-83fe-7adb8758de96 name=/runtime.v1.RuntimeService/Version
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.248063334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0979002a-82ef-433f-83fe-7adb8758de96 name=/runtime.v1.RuntimeService/Version
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.249263572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8666ea29-8dd4-466b-83f0-e6fe12665013 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.249961855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394682249939854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8666ea29-8dd4-466b-83f0-e6fe12665013 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.250628695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=492a346a-392f-4326-91b1-dc6daaa40f72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.250677832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=492a346a-392f-4326-91b1-dc6daaa40f72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.250869332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51e5f4ae535ef41dca154748d03fdbcf8c0dfbf301b4dd3688fcadeb6ffd3995,PodSandboxId:a7573e3bdb32ba30b0dd88f90b248eae6f6871dfdaf056a89bf11900470256d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760394674205347149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l2kbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3985e1b5-e0db-47f2-9570-72f559d341f4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0907c20a93e52a544ca5a946b6546411fb4e15533c0d4b96282fa40fe8a3e0c6,PodSandboxId:9213d40f92372c8b8c2845c3a90126322e921fb631ce32ec3af9ef58cf302b76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760394666554811843,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c75c9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 83b89e0a-db26-44f3-8208-4c14e5f72b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84bcd51c0a911f1fac50ae9be9a524f554a7956abea7b54990463665b506a32b,PodSandboxId:420d27ddd07c46182717105f0a5f686752c3478c300532a376e84bd34b260b94,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760394666548977309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
d91d67-e34a-40de-a284-edd177e765e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c7101af4bbce8d13246d79d0b2dc97cdea40a264dbeb84c416057db39a9dc8,PodSandboxId:c87d66c41347dca9a537d69bdf252369d703c7e6a9d08eb6289a79f200de42fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760394663126670679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3c2ba7970ac322a35c7b069ed734f4,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b8b97600cbdfb637cb3df1839cc3a9ffd21b08c389b4b1bbec81cc9ed2a6e7,PodSandboxId:3f36dca48bbb9df55118ef187afdea221b393c09ae7ed7f1781327c59f64e65d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760394663130391965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addc7e91e38bd583bad0956
f87a8bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224e19f36d1a263038677a361b1e13f610cd9b862c1f642912e4afc6891a8ba7,PodSandboxId:267941ddd5a9c5e754b36c1e4cec24d0cbd9b3c3b055a001abca9f3befde40ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760394663093856228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029c997f854ab3db462ee34868e7532,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a52847b403feddf107fd1ef93fd5ff80f3aa4b2b3b8c622771997613870fe5,PodSandboxId:3ba7578683bca1e3e2eefc6f3c80cfa8dca819a82bad3bddac73aba50cc8239a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760394663086543106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132653e3b167f55dee4989b20a4d32f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=492a346a-392f-4326-91b1-dc6daaa40f72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.288817308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d506bc7-301d-4abd-8aa1-bddd9015e75e name=/runtime.v1.RuntimeService/Version
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.289328837Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d506bc7-301d-4abd-8aa1-bddd9015e75e name=/runtime.v1.RuntimeService/Version
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.291965153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52510517-1e36-477d-baa7-262f2d3e87ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.292684593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394682292659731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52510517-1e36-477d-baa7-262f2d3e87ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.293265780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b6b202d-88a6-424e-8adf-a57b9e703f34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.293397728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b6b202d-88a6-424e-8adf-a57b9e703f34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.293683111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51e5f4ae535ef41dca154748d03fdbcf8c0dfbf301b4dd3688fcadeb6ffd3995,PodSandboxId:a7573e3bdb32ba30b0dd88f90b248eae6f6871dfdaf056a89bf11900470256d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760394674205347149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l2kbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3985e1b5-e0db-47f2-9570-72f559d341f4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0907c20a93e52a544ca5a946b6546411fb4e15533c0d4b96282fa40fe8a3e0c6,PodSandboxId:9213d40f92372c8b8c2845c3a90126322e921fb631ce32ec3af9ef58cf302b76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760394666554811843,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c75c9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 83b89e0a-db26-44f3-8208-4c14e5f72b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84bcd51c0a911f1fac50ae9be9a524f554a7956abea7b54990463665b506a32b,PodSandboxId:420d27ddd07c46182717105f0a5f686752c3478c300532a376e84bd34b260b94,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760394666548977309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
d91d67-e34a-40de-a284-edd177e765e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c7101af4bbce8d13246d79d0b2dc97cdea40a264dbeb84c416057db39a9dc8,PodSandboxId:c87d66c41347dca9a537d69bdf252369d703c7e6a9d08eb6289a79f200de42fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760394663126670679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3c2ba7970ac322a35c7b069ed734f4,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b8b97600cbdfb637cb3df1839cc3a9ffd21b08c389b4b1bbec81cc9ed2a6e7,PodSandboxId:3f36dca48bbb9df55118ef187afdea221b393c09ae7ed7f1781327c59f64e65d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760394663130391965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addc7e91e38bd583bad0956
f87a8bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224e19f36d1a263038677a361b1e13f610cd9b862c1f642912e4afc6891a8ba7,PodSandboxId:267941ddd5a9c5e754b36c1e4cec24d0cbd9b3c3b055a001abca9f3befde40ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760394663093856228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029c997f854ab3db462ee34868e7532,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a52847b403feddf107fd1ef93fd5ff80f3aa4b2b3b8c622771997613870fe5,PodSandboxId:3ba7578683bca1e3e2eefc6f3c80cfa8dca819a82bad3bddac73aba50cc8239a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760394663086543106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132653e3b167f55dee4989b20a4d32f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b6b202d-88a6-424e-8adf-a57b9e703f34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.313823737Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8bafb176-b753-441e-8ab9-8047684cdb27 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.314144521Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a7573e3bdb32ba30b0dd88f90b248eae6f6871dfdaf056a89bf11900470256d0,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-l2kbb,Uid:3985e1b5-e0db-47f2-9570-72f559d341f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394673982121574,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-l2kbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3985e1b5-e0db-47f2-9570-72f559d341f4,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T22:31:06.114804989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:420d27ddd07c46182717105f0a5f686752c3478c300532a376e84bd34b260b94,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:80d91d67-e34a-40de-a284-edd177e765e1,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394666430535307,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80d91d67-e34a-40de-a284-edd177e765e1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-13T22:31:06.114800502Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9213d40f92372c8b8c2845c3a90126322e921fb631ce32ec3af9ef58cf302b76,Metadata:&PodSandboxMetadata{Name:kube-proxy-c75c9,Uid:83b89e0a-db26-44f3-8208-4c14e5f72b6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394666429773134,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-c75c9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b89e0a-db26-44f3-8208-4c14e5f72b6d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-13T22:31:06.114811016Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c87d66c41347dca9a537d69bdf252369d703c7e6a9d08eb6289a79f200de42fb,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-047519,Uid:3c3c2ba7970ac322a
35c7b069ed734f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394662855202072,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3c2ba7970ac322a35c7b069ed734f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.205:2379,kubernetes.io/config.hash: 3c3c2ba7970ac322a35c7b069ed734f4,kubernetes.io/config.seen: 2025-10-13T22:31:01.200658118Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f36dca48bbb9df55118ef187afdea221b393c09ae7ed7f1781327c59f64e65d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-047519,Uid:addc7e91e38bd583bad0956f87a8bad8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394662852554724,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube
-controller-manager-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addc7e91e38bd583bad0956f87a8bad8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: addc7e91e38bd583bad0956f87a8bad8,kubernetes.io/config.seen: 2025-10-13T22:31:01.106896083Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ba7578683bca1e3e2eefc6f3c80cfa8dca819a82bad3bddac73aba50cc8239a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-047519,Uid:132653e3b167f55dee4989b20a4d32f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394662847735702,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132653e3b167f55dee4989b20a4d32f7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.205:8443,kubernetes.io/c
onfig.hash: 132653e3b167f55dee4989b20a4d32f7,kubernetes.io/config.seen: 2025-10-13T22:31:01.106895033Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:267941ddd5a9c5e754b36c1e4cec24d0cbd9b3c3b055a001abca9f3befde40ad,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-047519,Uid:8029c997f854ab3db462ee34868e7532,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760394662846403458,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029c997f854ab3db462ee34868e7532,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8029c997f854ab3db462ee34868e7532,kubernetes.io/config.seen: 2025-10-13T22:31:01.106890724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8bafb176-b753-441e-8ab9-8047684cdb27 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.316108052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04381b40-17be-4384-ba49-f2fbc0bdc9e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.316178486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04381b40-17be-4384-ba49-f2fbc0bdc9e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 13 22:31:22 test-preload-047519 crio[830]: time="2025-10-13 22:31:22.316415773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51e5f4ae535ef41dca154748d03fdbcf8c0dfbf301b4dd3688fcadeb6ffd3995,PodSandboxId:a7573e3bdb32ba30b0dd88f90b248eae6f6871dfdaf056a89bf11900470256d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760394674205347149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l2kbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3985e1b5-e0db-47f2-9570-72f559d341f4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0907c20a93e52a544ca5a946b6546411fb4e15533c0d4b96282fa40fe8a3e0c6,PodSandboxId:9213d40f92372c8b8c2845c3a90126322e921fb631ce32ec3af9ef58cf302b76,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760394666554811843,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c75c9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 83b89e0a-db26-44f3-8208-4c14e5f72b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84bcd51c0a911f1fac50ae9be9a524f554a7956abea7b54990463665b506a32b,PodSandboxId:420d27ddd07c46182717105f0a5f686752c3478c300532a376e84bd34b260b94,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760394666548977309,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80
d91d67-e34a-40de-a284-edd177e765e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c7101af4bbce8d13246d79d0b2dc97cdea40a264dbeb84c416057db39a9dc8,PodSandboxId:c87d66c41347dca9a537d69bdf252369d703c7e6a9d08eb6289a79f200de42fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760394663126670679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c3c2ba7970ac322a35c7b069ed734f4,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b8b97600cbdfb637cb3df1839cc3a9ffd21b08c389b4b1bbec81cc9ed2a6e7,PodSandboxId:3f36dca48bbb9df55118ef187afdea221b393c09ae7ed7f1781327c59f64e65d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760394663130391965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: addc7e91e38bd583bad0956
f87a8bad8,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224e19f36d1a263038677a361b1e13f610cd9b862c1f642912e4afc6891a8ba7,PodSandboxId:267941ddd5a9c5e754b36c1e4cec24d0cbd9b3c3b055a001abca9f3befde40ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760394663093856228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8029c997f854ab3db462ee34868e7532,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a52847b403feddf107fd1ef93fd5ff80f3aa4b2b3b8c622771997613870fe5,PodSandboxId:3ba7578683bca1e3e2eefc6f3c80cfa8dca819a82bad3bddac73aba50cc8239a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760394663086543106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-047519,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132653e3b167f55dee4989b20a4d32f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04381b40-17be-4384-ba49-f2fbc0bdc9e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51e5f4ae535ef       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   a7573e3bdb32b       coredns-668d6bf9bc-l2kbb
	0907c20a93e52       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   9213d40f92372       kube-proxy-c75c9
	84bcd51c0a911       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   420d27ddd07c4       storage-provisioner
	15b8b97600cbd       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   3f36dca48bbb9       kube-controller-manager-test-preload-047519
	e5c7101af4bbc       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   c87d66c41347d       etcd-test-preload-047519
	224e19f36d1a2       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   267941ddd5a9c       kube-scheduler-test-preload-047519
	a4a52847b403f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   3ba7578683bca       kube-apiserver-test-preload-047519
	
	
	==> coredns [51e5f4ae535ef41dca154748d03fdbcf8c0dfbf301b4dd3688fcadeb6ffd3995] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48243 - 55854 "HINFO IN 816208299612792061.510456852332847342. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.043407053s
	
	
	==> describe nodes <==
	Name:               test-preload-047519
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-047519
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=18273cc699fc357fbf4f93654efe4966698a9f22
	                    minikube.k8s.io/name=test-preload-047519
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_29_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:29:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-047519
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:31:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:31:16 +0000   Mon, 13 Oct 2025 22:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:31:16 +0000   Mon, 13 Oct 2025 22:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:31:16 +0000   Mon, 13 Oct 2025 22:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:31:16 +0000   Mon, 13 Oct 2025 22:31:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    test-preload-047519
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 1015083cad6d42418bdc92f81771c24d
	  System UUID:                1015083c-ad6d-4241-8bdc-92f81771c24d
	  Boot ID:                    a681373d-6c44-4c85-9125-dfc5a513d973
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-l2kbb                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     95s
	  kube-system                 etcd-test-preload-047519                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         99s
	  kube-system                 kube-apiserver-test-preload-047519             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-test-preload-047519    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-c75c9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-test-preload-047519             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 93s                  kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   Starting                 106s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x8 over 106s)  kubelet          Node test-preload-047519 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x8 over 106s)  kubelet          Node test-preload-047519 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x7 over 106s)  kubelet          Node test-preload-047519 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    99s                  kubelet          Node test-preload-047519 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  99s                  kubelet          Node test-preload-047519 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     99s                  kubelet          Node test-preload-047519 status is now: NodeHasSufficientPID
	  Normal   Starting                 99s                  kubelet          Starting kubelet.
	  Normal   NodeReady                98s                  kubelet          Node test-preload-047519 status is now: NodeReady
	  Normal   RegisteredNode           96s                  node-controller  Node test-preload-047519 event: Registered Node test-preload-047519 in Controller
	  Normal   Starting                 21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-047519 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-047519 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-047519 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                  kubelet          Node test-preload-047519 has been rebooted, boot id: a681373d-6c44-4c85-9125-dfc5a513d973
	  Normal   RegisteredNode           13s                  node-controller  Node test-preload-047519 event: Registered Node test-preload-047519 in Controller
	
	
	==> dmesg <==
	[Oct13 22:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000044] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004167] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.986777] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085473] kauditd_printk_skb: 4 callbacks suppressed
	[Oct13 22:31] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.468022] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000044] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.025971] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [e5c7101af4bbce8d13246d79d0b2dc97cdea40a264dbeb84c416057db39a9dc8] <==
	{"level":"info","ts":"2025-10-13T22:31:03.674773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e switched to configuration voters=(12889633661048190622)"}
	{"level":"info","ts":"2025-10-13T22:31:03.680571Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","added-peer-id":"b2e12d85c3b1f69e","added-peer-peer-urls":["https://192.168.39.205:2380"]}
	{"level":"info","ts":"2025-10-13T22:31:03.682476Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"38e4ac523bec2149","local-member-id":"b2e12d85c3b1f69e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:31:03.682568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:31:03.681483Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T22:31:03.683630Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b2e12d85c3b1f69e","initial-advertise-peer-urls":["https://192.168.39.205:2380"],"listen-peer-urls":["https://192.168.39.205:2380"],"advertise-client-urls":["https://192.168.39.205:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.205:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T22:31:03.683680Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:31:03.681505Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2025-10-13T22:31:03.683737Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.205:2380"}
	{"level":"info","ts":"2025-10-13T22:31:04.722392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T22:31:04.722496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:31:04.722547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e received MsgPreVoteResp from b2e12d85c3b1f69e at term 2"}
	{"level":"info","ts":"2025-10-13T22:31:04.722587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T22:31:04.722605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e received MsgVoteResp from b2e12d85c3b1f69e at term 3"}
	{"level":"info","ts":"2025-10-13T22:31:04.722629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2e12d85c3b1f69e became leader at term 3"}
	{"level":"info","ts":"2025-10-13T22:31:04.722653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2e12d85c3b1f69e elected leader b2e12d85c3b1f69e at term 3"}
	{"level":"info","ts":"2025-10-13T22:31:04.725392Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b2e12d85c3b1f69e","local-member-attributes":"{Name:test-preload-047519 ClientURLs:[https://192.168.39.205:2379]}","request-path":"/0/members/b2e12d85c3b1f69e/attributes","cluster-id":"38e4ac523bec2149","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:31:04.725407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:31:04.725426Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:31:04.726514Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-13T22:31:04.727021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T22:31:04.727333Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:31:04.727366Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T22:31:04.729098Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-13T22:31:04.729667Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.205:2379"}
	
	
	==> kernel <==
	 22:31:22 up 0 min,  0 users,  load average: 0.92, 0.25, 0.08
	Linux test-preload-047519 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a4a52847b403feddf107fd1ef93fd5ff80f3aa4b2b3b8c622771997613870fe5] <==
	I1013 22:31:06.015528       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1013 22:31:06.015581       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:31:06.015588       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:31:06.015593       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:31:06.015597       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:31:06.018985       1 shared_informer.go:320] Caches are synced for configmaps
	I1013 22:31:06.019229       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:31:06.019411       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:31:06.019507       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:31:06.019685       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1013 22:31:06.019714       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:31:06.019722       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1013 22:31:06.022000       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:31:06.025878       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:31:06.063808       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1013 22:31:06.068608       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:31:06.176067       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1013 22:31:06.819071       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:31:07.419051       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1013 22:31:07.456070       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1013 22:31:07.484710       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:31:07.491764       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:31:09.585699       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:31:09.635560       1 controller.go:615] quota admission added evaluator for: endpoints
	I1013 22:31:09.685974       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [15b8b97600cbdfb637cb3df1839cc3a9ffd21b08c389b4b1bbec81cc9ed2a6e7] <==
	I1013 22:31:09.236903       1 shared_informer.go:320] Caches are synced for stateful set
	I1013 22:31:09.242091       1 shared_informer.go:320] Caches are synced for job
	I1013 22:31:09.245494       1 shared_informer.go:320] Caches are synced for garbage collector
	I1013 22:31:09.247088       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1013 22:31:09.253190       1 shared_informer.go:320] Caches are synced for persistent volume
	I1013 22:31:09.256878       1 shared_informer.go:320] Caches are synced for ephemeral
	I1013 22:31:09.256903       1 shared_informer.go:320] Caches are synced for resource quota
	I1013 22:31:09.260368       1 shared_informer.go:320] Caches are synced for endpoint
	I1013 22:31:09.264823       1 shared_informer.go:320] Caches are synced for service account
	I1013 22:31:09.266118       1 shared_informer.go:320] Caches are synced for attach detach
	I1013 22:31:09.267403       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1013 22:31:09.268685       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1013 22:31:09.270398       1 shared_informer.go:320] Caches are synced for cronjob
	I1013 22:31:09.275570       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1013 22:31:09.284658       1 shared_informer.go:320] Caches are synced for garbage collector
	I1013 22:31:09.284849       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:31:09.284871       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:31:09.697863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="430.261326ms"
	I1013 22:31:09.698146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="127.392µs"
	I1013 22:31:14.305915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="585.569µs"
	I1013 22:31:15.318962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.419234ms"
	I1013 22:31:15.319497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="417.703µs"
	I1013 22:31:16.286605       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-047519"
	I1013 22:31:16.303250       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-047519"
	I1013 22:31:19.192497       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0907c20a93e52a544ca5a946b6546411fb4e15533c0d4b96282fa40fe8a3e0c6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1013 22:31:06.752127       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1013 22:31:06.762104       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.205"]
	E1013 22:31:06.762190       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:31:06.801939       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1013 22:31:06.801995       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1013 22:31:06.802017       1 server_linux.go:170] "Using iptables Proxier"
	I1013 22:31:06.804988       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:31:06.805275       1 server.go:497] "Version info" version="v1.32.0"
	I1013 22:31:06.805352       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:31:06.807180       1 config.go:199] "Starting service config controller"
	I1013 22:31:06.807225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1013 22:31:06.807251       1 config.go:105] "Starting endpoint slice config controller"
	I1013 22:31:06.807255       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1013 22:31:06.807823       1 config.go:329] "Starting node config controller"
	I1013 22:31:06.807854       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1013 22:31:06.907465       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1013 22:31:06.907549       1 shared_informer.go:320] Caches are synced for service config
	I1013 22:31:06.907933       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [224e19f36d1a263038677a361b1e13f610cd9b862c1f642912e4afc6891a8ba7] <==
	I1013 22:31:04.393189       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:31:05.946413       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:31:05.946454       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:31:05.946467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:31:05.946476       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:31:05.986207       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1013 22:31:05.986329       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:31:05.996903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:31:05.996934       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 22:31:05.997039       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1013 22:31:05.997114       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:31:06.098121       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.133166    1158 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.165687    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83b89e0a-db26-44f3-8208-4c14e5f72b6d-xtables-lock\") pod \"kube-proxy-c75c9\" (UID: \"83b89e0a-db26-44f3-8208-4c14e5f72b6d\") " pod="kube-system/kube-proxy-c75c9"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.165738    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80d91d67-e34a-40de-a284-edd177e765e1-tmp\") pod \"storage-provisioner\" (UID: \"80d91d67-e34a-40de-a284-edd177e765e1\") " pod="kube-system/storage-provisioner"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.165767    1158 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83b89e0a-db26-44f3-8208-4c14e5f72b6d-lib-modules\") pod \"kube-proxy-c75c9\" (UID: \"83b89e0a-db26-44f3-8208-4c14e5f72b6d\") " pod="kube-system/kube-proxy-c75c9"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.166039    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.166110    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume podName:3985e1b5-e0db-47f2-9570-72f559d341f4 nodeName:}" failed. No retries permitted until 2025-10-13 22:31:06.666088627 +0000 UTC m=+5.652315923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume") pod "coredns-668d6bf9bc-l2kbb" (UID: "3985e1b5-e0db-47f2-9570-72f559d341f4") : object "kube-system"/"coredns" not registered
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.186087    1158 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.239848    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-047519"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.240259    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-047519"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: I1013 22:31:06.240470    1158 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-047519"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.259404    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-047519\" already exists" pod="kube-system/kube-apiserver-test-preload-047519"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.259867    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-047519\" already exists" pod="kube-system/kube-scheduler-test-preload-047519"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.259929    1158 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-047519\" already exists" pod="kube-system/etcd-test-preload-047519"
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.668570    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 13 22:31:06 test-preload-047519 kubelet[1158]: E1013 22:31:06.668641    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume podName:3985e1b5-e0db-47f2-9570-72f559d341f4 nodeName:}" failed. No retries permitted until 2025-10-13 22:31:07.668628173 +0000 UTC m=+6.654855468 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume") pod "coredns-668d6bf9bc-l2kbb" (UID: "3985e1b5-e0db-47f2-9570-72f559d341f4") : object "kube-system"/"coredns" not registered
	Oct 13 22:31:07 test-preload-047519 kubelet[1158]: E1013 22:31:07.676084    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 13 22:31:07 test-preload-047519 kubelet[1158]: E1013 22:31:07.676164    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume podName:3985e1b5-e0db-47f2-9570-72f559d341f4 nodeName:}" failed. No retries permitted until 2025-10-13 22:31:09.676151077 +0000 UTC m=+8.662378377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume") pod "coredns-668d6bf9bc-l2kbb" (UID: "3985e1b5-e0db-47f2-9570-72f559d341f4") : object "kube-system"/"coredns" not registered
	Oct 13 22:31:08 test-preload-047519 kubelet[1158]: E1013 22:31:08.171636    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-l2kbb" podUID="3985e1b5-e0db-47f2-9570-72f559d341f4"
	Oct 13 22:31:09 test-preload-047519 kubelet[1158]: E1013 22:31:09.694151    1158 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 13 22:31:09 test-preload-047519 kubelet[1158]: E1013 22:31:09.694220    1158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume podName:3985e1b5-e0db-47f2-9570-72f559d341f4 nodeName:}" failed. No retries permitted until 2025-10-13 22:31:13.694203834 +0000 UTC m=+12.680431116 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3985e1b5-e0db-47f2-9570-72f559d341f4-config-volume") pod "coredns-668d6bf9bc-l2kbb" (UID: "3985e1b5-e0db-47f2-9570-72f559d341f4") : object "kube-system"/"coredns" not registered
	Oct 13 22:31:10 test-preload-047519 kubelet[1158]: E1013 22:31:10.171576    1158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-l2kbb" podUID="3985e1b5-e0db-47f2-9570-72f559d341f4"
	Oct 13 22:31:11 test-preload-047519 kubelet[1158]: E1013 22:31:11.188845    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394671188647992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 13 22:31:11 test-preload-047519 kubelet[1158]: E1013 22:31:11.188866    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394671188647992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 13 22:31:21 test-preload-047519 kubelet[1158]: E1013 22:31:21.191549    1158 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394681189633892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 13 22:31:21 test-preload-047519 kubelet[1158]: E1013 22:31:21.191598    1158 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760394681189633892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [84bcd51c0a911f1fac50ae9be9a524f554a7956abea7b54990463665b506a32b] <==
	I1013 22:31:06.659898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-047519 -n test-preload-047519
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-047519 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-047519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-047519
--- FAIL: TestPreload (153.96s)

TestPause/serial/SecondStartNoReconfiguration (66.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-056726 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-056726 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.92090321s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-056726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-056726" primary control-plane node in "pause-056726" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-056726" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1013 22:38:23.963096   64307 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:38:23.963240   64307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:38:23.963252   64307 out.go:374] Setting ErrFile to fd 2...
	I1013 22:38:23.963265   64307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:38:23.963571   64307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:38:23.964118   64307 out.go:368] Setting JSON to false
	I1013 22:38:23.965137   64307 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8452,"bootTime":1760386652,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:38:23.965270   64307 start.go:141] virtualization: kvm guest
	I1013 22:38:23.967429   64307 out.go:179] * [pause-056726] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:38:23.968864   64307 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:38:23.968870   64307 notify.go:220] Checking for updates...
	I1013 22:38:23.971444   64307 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:38:23.972764   64307 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:38:23.974114   64307 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:38:23.975873   64307 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:38:23.977199   64307 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:38:23.979298   64307 config.go:182] Loaded profile config "pause-056726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:23.979720   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:38:23.979784   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:38:23.996033   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I1013 22:38:23.996911   64307 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:38:23.997805   64307 main.go:141] libmachine: Using API Version  1
	I1013 22:38:23.997848   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:38:23.998643   64307 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:38:23.998888   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:23.999275   64307 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:38:23.999762   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:38:23.999835   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:38:24.014598   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I1013 22:38:24.015330   64307 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:38:24.015919   64307 main.go:141] libmachine: Using API Version  1
	I1013 22:38:24.015951   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:38:24.016435   64307 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:38:24.016691   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:24.058076   64307 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 22:38:24.059319   64307 start.go:305] selected driver: kvm2
	I1013 22:38:24.059335   64307 start.go:925] validating driver "kvm2" against &{Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:24.059465   64307 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:38:24.059792   64307 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:38:24.059875   64307 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:38:24.074863   64307 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:38:24.074904   64307 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:38:24.091482   64307 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:38:24.092630   64307 cni.go:84] Creating CNI manager for ""
	I1013 22:38:24.092697   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:38:24.092763   64307 start.go:349] cluster config:
	{Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:24.092933   64307 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:38:24.095353   64307 out.go:179] * Starting "pause-056726" primary control-plane node in "pause-056726" cluster
	I1013 22:38:24.096660   64307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:24.096704   64307 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:38:24.096715   64307 cache.go:58] Caching tarball of preloaded images
	I1013 22:38:24.096800   64307 preload.go:233] Found /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:38:24.096813   64307 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:38:24.096925   64307 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/config.json ...
	I1013 22:38:24.097144   64307 start.go:360] acquireMachinesLock for pause-056726: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 22:38:41.032403   64307 start.go:364] duration metric: took 16.935216989s to acquireMachinesLock for "pause-056726"
	I1013 22:38:41.032454   64307 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:38:41.032465   64307 fix.go:54] fixHost starting: 
	I1013 22:38:41.032883   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:38:41.032948   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:38:41.051008   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I1013 22:38:41.051478   64307 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:38:41.052023   64307 main.go:141] libmachine: Using API Version  1
	I1013 22:38:41.052049   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:38:41.052480   64307 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:38:41.052692   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:41.052869   64307 main.go:141] libmachine: (pause-056726) Calling .GetState
	I1013 22:38:41.055703   64307 fix.go:112] recreateIfNeeded on pause-056726: state=Running err=<nil>
	W1013 22:38:41.055726   64307 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:38:41.057601   64307 out.go:252] * Updating the running kvm2 "pause-056726" VM ...
	I1013 22:38:41.057638   64307 machine.go:93] provisionDockerMachine start ...
	I1013 22:38:41.057654   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:41.057844   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.061178   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.061536   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.061574   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.061741   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.061937   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.062110   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.062280   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.062477   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.062726   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.062742   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:38:41.186066   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056726
	
	I1013 22:38:41.186102   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.186437   64307 buildroot.go:166] provisioning hostname "pause-056726"
	I1013 22:38:41.186470   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.186698   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.190353   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.190799   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.190830   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.191002   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.191218   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.191395   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.191546   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.191851   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.192120   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.192142   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056726 && echo "pause-056726" | sudo tee /etc/hostname
	I1013 22:38:41.336470   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056726
	
	I1013 22:38:41.336503   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.340097   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.340706   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.340753   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.341057   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.341297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.341500   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.341718   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.341910   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.342221   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.342262   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056726/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:38:41.465951   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:38:41.465998   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 22:38:41.466022   64307 buildroot.go:174] setting up certificates
	I1013 22:38:41.466039   64307 provision.go:84] configureAuth start
	I1013 22:38:41.466058   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.466350   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:41.470586   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.471088   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.471129   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.471590   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.475221   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.475850   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.475880   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.476182   64307 provision.go:143] copyHostCerts
	I1013 22:38:41.476251   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 22:38:41.476272   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 22:38:41.476339   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 22:38:41.476489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 22:38:41.476505   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 22:38:41.476543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 22:38:41.476636   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 22:38:41.476649   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 22:38:41.476681   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 22:38:41.476763   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.pause-056726 san=[127.0.0.1 192.168.50.114 localhost minikube pause-056726]
	I1013 22:38:41.976552   64307 provision.go:177] copyRemoteCerts
	I1013 22:38:41.976618   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:38:41.976659   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.980446   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.980969   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.980999   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.981297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.981600   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.981786   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.981995   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:42.080693   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:38:42.128691   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:38:42.168107   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:38:42.205857   64307 provision.go:87] duration metric: took 739.797808ms to configureAuth
	I1013 22:38:42.205917   64307 buildroot.go:189] setting minikube options for container-runtime
	I1013 22:38:42.206211   64307 config.go:182] Loaded profile config "pause-056726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.206320   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:42.213002   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:42.213603   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:42.213636   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:42.213913   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:42.214121   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:42.214296   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:42.214418   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:42.214664   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:42.214890   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:42.214910   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:38:47.826384   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:38:47.826408   64307 machine.go:96] duration metric: took 6.768762066s to provisionDockerMachine
	I1013 22:38:47.826422   64307 start.go:293] postStartSetup for "pause-056726" (driver="kvm2")
	I1013 22:38:47.826434   64307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:38:47.826454   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:47.826830   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:38:47.826862   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:47.830452   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.830934   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:47.830965   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.831171   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:47.831353   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.831505   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:47.831701   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:47.923525   64307 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:38:47.929446   64307 info.go:137] Remote host: Buildroot 2025.02
	I1013 22:38:47.929471   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 22:38:47.929552   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 22:38:47.929654   64307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 22:38:47.929798   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:38:47.945141   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:47.982748   64307 start.go:296] duration metric: took 156.310071ms for postStartSetup
	I1013 22:38:47.982792   64307 fix.go:56] duration metric: took 6.95032763s for fixHost
	I1013 22:38:47.982816   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:47.986308   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.986786   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:47.986817   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.987066   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:47.987297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.987484   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.987666   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:47.987856   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:47.988133   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:47.988149   64307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 22:38:48.109483   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760395128.101107801
	
	I1013 22:38:48.109504   64307 fix.go:216] guest clock: 1760395128.101107801
	I1013 22:38:48.109512   64307 fix.go:229] Guest: 2025-10-13 22:38:48.101107801 +0000 UTC Remote: 2025-10-13 22:38:47.98279722 +0000 UTC m=+24.069035821 (delta=118.310581ms)
	I1013 22:38:48.109537   64307 fix.go:200] guest clock delta is within tolerance: 118.310581ms
	I1013 22:38:48.109544   64307 start.go:83] releasing machines lock for "pause-056726", held for 7.07711387s
	I1013 22:38:48.109575   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.109858   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:48.113678   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.114210   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.114245   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.114431   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115054   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115281   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115402   64307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:38:48.115455   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:48.115585   64307 ssh_runner.go:195] Run: cat /version.json
	I1013 22:38:48.115610   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:48.120256   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.120941   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.121395   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.121420   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.121684   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:48.121714   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.121840   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.122058   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:48.122212   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:48.122373   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:48.122596   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:48.122603   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:48.122825   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:48.123254   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:48.209685   64307 ssh_runner.go:195] Run: systemctl --version
	I1013 22:38:48.237877   64307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:38:48.485856   64307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:38:48.496627   64307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:38:48.496704   64307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:38:48.510288   64307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:38:48.510318   64307 start.go:495] detecting cgroup driver to use...
	I1013 22:38:48.510400   64307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:38:48.539084   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:38:48.566554   64307 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:38:48.566613   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:38:48.596210   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:38:48.620854   64307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:38:48.872388   64307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:38:49.088960   64307 docker.go:234] disabling docker service ...
	I1013 22:38:49.089059   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:38:49.122978   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:38:49.142380   64307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:38:49.345900   64307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:38:49.582902   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:38:49.603147   64307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:38:49.634419   64307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:38:49.634491   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.649208   64307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:38:49.649288   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.682378   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.704376   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.758297   64307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:38:49.787167   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.820057   64307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.843948   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.878037   64307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:38:49.905531   64307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:38:49.925073   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:50.298279   64307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:38:50.864747   64307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:38:50.864846   64307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:38:50.873254   64307 start.go:563] Will wait 60s for crictl version
	I1013 22:38:50.873323   64307 ssh_runner.go:195] Run: which crictl
	I1013 22:38:50.880216   64307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 22:38:50.931241   64307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 22:38:50.931319   64307 ssh_runner.go:195] Run: crio --version
	I1013 22:38:50.968087   64307 ssh_runner.go:195] Run: crio --version
	I1013 22:38:51.010888   64307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 22:38:51.012295   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:51.016095   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:51.016687   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:51.016718   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:51.017048   64307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 22:38:51.023670   64307 kubeadm.go:883] updating cluster {Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:38:51.023832   64307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:51.023891   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:51.081614   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:51.081644   64307 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:38:51.081718   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:51.130060   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:51.130087   64307 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:38:51.130095   64307 kubeadm.go:934] updating node { 192.168.50.114 8443 v1.34.1 crio true true} ...
	I1013 22:38:51.130248   64307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-056726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:38:51.130346   64307 ssh_runner.go:195] Run: crio config
	I1013 22:38:51.201189   64307 cni.go:84] Creating CNI manager for ""
	I1013 22:38:51.201222   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:38:51.201242   64307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:38:51.201267   64307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056726 NodeName:pause-056726 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:38:51.201429   64307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056726"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:38:51.201498   64307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:38:51.217808   64307 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:38:51.217897   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:38:51.233569   64307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1013 22:38:51.261591   64307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:38:51.287766   64307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1013 22:38:51.316017   64307 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I1013 22:38:51.321143   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:51.572704   64307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:38:51.643105   64307 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726 for IP: 192.168.50.114
	I1013 22:38:51.643127   64307 certs.go:195] generating shared ca certs ...
	I1013 22:38:51.643172   64307 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:51.643346   64307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 22:38:51.643408   64307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 22:38:51.643424   64307 certs.go:257] generating profile certs ...
	I1013 22:38:51.643550   64307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/client.key
	I1013 22:38:51.643650   64307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.key.470e9060
	I1013 22:38:51.643709   64307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.key
	I1013 22:38:51.643862   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 22:38:51.643922   64307 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 22:38:51.643944   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:38:51.643989   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:38:51.644039   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:38:51.644088   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 22:38:51.644185   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:51.645127   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:38:51.767866   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:38:51.872623   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:38:51.962000   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:38:52.020524   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:38:52.106256   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:38:52.186178   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:38:52.253585   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:38:52.358197   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 22:38:52.424688   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:38:52.471765   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 22:38:52.527060   64307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:38:52.595263   64307 ssh_runner.go:195] Run: openssl version
	I1013 22:38:52.603719   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 22:38:52.624291   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.630957   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.631025   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.639973   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:38:52.654151   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:38:52.671610   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.678096   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.678190   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.686913   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:38:52.703128   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 22:38:52.733509   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.747790   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.747855   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.762122   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 22:38:52.795639   64307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:38:52.802035   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:38:52.810138   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:38:52.818740   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:38:52.826691   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:38:52.835090   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:38:52.843652   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 22:38:52.852783   64307 kubeadm.go:400] StartCluster: {Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:52.852934   64307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:38:52.852998   64307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:38:52.902942   64307 cri.go:89] found id: "1976935c4f01c7b9a13df7bb5d1d9ef512d248f7c51f7a17a8b7f01f5550a483"
	I1013 22:38:52.902969   64307 cri.go:89] found id: "7c29c423def7a994b132040a9614198e6a709fb14a87b4aacd14e813aa559ac8"
	I1013 22:38:52.902975   64307 cri.go:89] found id: "2da2442d80a23198b8938c1f85a9a443748c2b569431aed123dd840114bc725e"
	I1013 22:38:52.902980   64307 cri.go:89] found id: "46e601cd1b2a167997d7436a8e04ac20c370b61038e9b38abdbcafb3714df69a"
	I1013 22:38:52.902984   64307 cri.go:89] found id: "6eecfceb7178ca1572d2db0b0e0d133f998fef7c72f5be015811563a9c3b9ab7"
	I1013 22:38:52.902989   64307 cri.go:89] found id: "346a3bf45b515168f44c5eb17452a5999dc929d16bb03bfcb6b992a05d0e5953"
	I1013 22:38:52.902992   64307 cri.go:89] found id: "8341b5658a3dbfd304eee1bfcc1db60614f0dde6f2f0db558b10851d5bea38ab"
	I1013 22:38:52.902996   64307 cri.go:89] found id: "cc85e6bee7a15884026948a07a78f5832470b4fdf1803cf08249b1b207b9a86c"
	I1013 22:38:52.902999   64307 cri.go:89] found id: ""
	I1013 22:38:52.903071   64307 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-056726 -n pause-056726
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-056726 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-056726 logs -n 25: (1.693373019s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p running-upgrade-410631 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-410631    │ jenkins │ v1.37.0 │ 13 Oct 25 22:35 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                         │ kubernetes-upgrade-766348 │ jenkins │ v1.37.0 │ 13 Oct 25 22:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-766348 │ jenkins │ v1.37.0 │ 13 Oct 25 22:35 UTC │ 13 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-794544 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-794544                                                                                                                                                                                                                              │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-794544 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-694787 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ stopped-upgrade-694787    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ delete  │ -p stopped-upgrade-694787                                                                                                                                                                                                                           │ stopped-upgrade-694787    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p pause-056726 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-056726              │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:38 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-410631 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-410631    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ delete  │ -p running-upgrade-410631                                                                                                                                                                                                                           │ running-upgrade-410631    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p cert-expiration-591329 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-591329    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-794544 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ delete  │ -p NoKubernetes-794544                                                                                                                                                                                                                              │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p force-systemd-flag-331035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                               │ force-systemd-flag-331035 │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:38 UTC │
	│ delete  │ -p kubernetes-upgrade-766348                                                                                                                                                                                                                        │ kubernetes-upgrade-766348 │ jenkins │ v1.37.0 │ 13 Oct 25 22:37 UTC │ 13 Oct 25 22:37 UTC │
	│ start   │ -p cert-options-746983 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:37 UTC │ 13 Oct 25 22:38 UTC │
	│ ssh     │ force-systemd-flag-331035 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-331035 │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ delete  │ -p force-systemd-flag-331035                                                                                                                                                                                                                        │ force-systemd-flag-331035 │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ start   │ -p auto-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                   │ auto-851286               │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │                     │
	│ start   │ -p pause-056726 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-056726              │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:39 UTC │
	│ ssh     │ cert-options-746983 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ ssh     │ -p cert-options-746983 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ delete  │ -p cert-options-746983                                                                                                                                                                                                                              │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ start   │ -p flannel-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ flannel-851286            │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:38:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:38:42.856352   64655 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:38:42.856626   64655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:38:42.856636   64655 out.go:374] Setting ErrFile to fd 2...
	I1013 22:38:42.856640   64655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:38:42.856811   64655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:38:42.857330   64655 out.go:368] Setting JSON to false
	I1013 22:38:42.858309   64655 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8471,"bootTime":1760386652,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:38:42.858420   64655 start.go:141] virtualization: kvm guest
	I1013 22:38:42.861162   64655 out.go:179] * [flannel-851286] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:38:42.862688   64655 notify.go:220] Checking for updates...
	I1013 22:38:42.862717   64655 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:38:42.864349   64655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:38:42.865845   64655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:38:42.867071   64655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:38:42.868375   64655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:38:42.869596   64655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:38:42.871251   64655 config.go:182] Loaded profile config "auto-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.871372   64655 config.go:182] Loaded profile config "cert-expiration-591329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.871528   64655 config.go:182] Loaded profile config "pause-056726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.871631   64655 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:38:42.909871   64655 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 22:38:42.911332   64655 start.go:305] selected driver: kvm2
	I1013 22:38:42.911353   64655 start.go:925] validating driver "kvm2" against <nil>
	I1013 22:38:42.911366   64655 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:38:42.912093   64655 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:38:42.912177   64655 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:38:42.926272   64655 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:38:42.926308   64655 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:38:42.940181   64655 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:38:42.940217   64655 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:38:42.940516   64655 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:38:42.940546   64655 cni.go:84] Creating CNI manager for "flannel"
	I1013 22:38:42.940553   64655 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1013 22:38:42.940594   64655 start.go:349] cluster config:
	{Name:flannel-851286 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:42.940683   64655 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:38:42.942553   64655 out.go:179] * Starting "flannel-851286" primary control-plane node in "flannel-851286" cluster
	I1013 22:38:39.953361   64164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:38:39.953390   64164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 22:38:39.953416   64164 buildroot.go:174] setting up certificates
	I1013 22:38:39.953440   64164 provision.go:84] configureAuth start
	I1013 22:38:39.953456   64164 main.go:141] libmachine: (auto-851286) Calling .GetMachineName
	I1013 22:38:39.953766   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:39.957129   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.957695   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:39.957724   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.958030   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:39.961396   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.961782   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:39.961799   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.962030   64164 provision.go:143] copyHostCerts
	I1013 22:38:39.962094   64164 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 22:38:39.962116   64164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 22:38:39.962225   64164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 22:38:39.962375   64164 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 22:38:39.962390   64164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 22:38:39.962436   64164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 22:38:39.962539   64164 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 22:38:39.962553   64164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 22:38:39.962594   64164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 22:38:39.962687   64164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.auto-851286 san=[127.0.0.1 192.168.83.51 auto-851286 localhost minikube]
	I1013 22:38:40.244643   64164 provision.go:177] copyRemoteCerts
	I1013 22:38:40.244697   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:38:40.244718   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.248058   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.248511   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.248540   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.248750   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.248964   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.249211   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.249380   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:40.344712   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:38:40.389329   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:38:40.423888   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 22:38:40.464194   64164 provision.go:87] duration metric: took 510.734588ms to configureAuth
	I1013 22:38:40.464235   64164 buildroot.go:189] setting minikube options for container-runtime
	I1013 22:38:40.464479   64164 config.go:182] Loaded profile config "auto-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:40.464622   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.468367   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.469049   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.469086   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.469333   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.469580   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.469760   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.469922   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.470086   64164 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:40.470321   64164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.51 22 <nil> <nil>}
	I1013 22:38:40.470337   64164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:38:40.741468   64164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:38:40.741492   64164 main.go:141] libmachine: Checking connection to Docker...
	I1013 22:38:40.741503   64164 main.go:141] libmachine: (auto-851286) Calling .GetURL
	I1013 22:38:40.743136   64164 main.go:141] libmachine: (auto-851286) DBG | using libvirt version 8000000
	I1013 22:38:40.746366   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.746798   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.746830   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.747020   64164 main.go:141] libmachine: Docker is up and running!
	I1013 22:38:40.747038   64164 main.go:141] libmachine: Reticulating splines...
	I1013 22:38:40.747060   64164 client.go:171] duration metric: took 20.685647045s to LocalClient.Create
	I1013 22:38:40.747098   64164 start.go:167] duration metric: took 20.685746671s to libmachine.API.Create "auto-851286"
	I1013 22:38:40.747114   64164 start.go:293] postStartSetup for "auto-851286" (driver="kvm2")
	I1013 22:38:40.747125   64164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:38:40.747151   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:40.747438   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:38:40.747468   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.749975   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.750443   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.750469   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.750631   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.750808   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.750975   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.751186   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:40.843488   64164 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:38:40.849898   64164 info.go:137] Remote host: Buildroot 2025.02
	I1013 22:38:40.849932   64164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 22:38:40.850021   64164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 22:38:40.850133   64164 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 22:38:40.850273   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:38:40.866413   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:40.910325   64164 start.go:296] duration metric: took 163.195706ms for postStartSetup
	I1013 22:38:40.910372   64164 main.go:141] libmachine: (auto-851286) Calling .GetConfigRaw
	I1013 22:38:40.910965   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:40.914257   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.914777   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.914803   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.915146   64164 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/config.json ...
	I1013 22:38:40.916018   64164 start.go:128] duration metric: took 20.873203703s to createHost
	I1013 22:38:40.916051   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.919356   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.919766   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.919791   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.920006   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.920229   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.920407   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.920603   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.920801   64164 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:40.921113   64164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.51 22 <nil> <nil>}
	I1013 22:38:40.921134   64164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 22:38:41.032251   64164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760395120.996421859
	
	I1013 22:38:41.032276   64164 fix.go:216] guest clock: 1760395120.996421859
	I1013 22:38:41.032286   64164 fix.go:229] Guest: 2025-10-13 22:38:40.996421859 +0000 UTC Remote: 2025-10-13 22:38:40.916037001 +0000 UTC m=+21.004007806 (delta=80.384858ms)
	I1013 22:38:41.032312   64164 fix.go:200] guest clock delta is within tolerance: 80.384858ms
	I1013 22:38:41.032318   64164 start.go:83] releasing machines lock for "auto-851286", held for 20.989617228s
	I1013 22:38:41.032346   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.032667   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:41.036080   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.036574   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:41.036624   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.036829   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.037476   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.037681   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.037815   64164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:38:41.037864   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:41.037957   64164 ssh_runner.go:195] Run: cat /version.json
	I1013 22:38:41.037997   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:41.042243   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.042385   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.043030   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:41.043087   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.043214   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:41.043231   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.043570   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:41.043782   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:41.043797   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:41.044108   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:41.044148   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:41.044269   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:41.044375   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:41.044869   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:41.127087   64164 ssh_runner.go:195] Run: systemctl --version
	I1013 22:38:41.154041   64164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:38:41.320547   64164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:38:41.328073   64164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:38:41.328153   64164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:38:41.351687   64164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:38:41.351715   64164 start.go:495] detecting cgroup driver to use...
	I1013 22:38:41.351792   64164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:38:41.373410   64164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:38:41.395305   64164 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:38:41.395361   64164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:38:41.420896   64164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:38:41.445748   64164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:38:41.640002   64164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:38:41.904618   64164 docker.go:234] disabling docker service ...
	I1013 22:38:41.904697   64164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:38:41.924294   64164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:38:41.941657   64164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:38:42.131924   64164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:38:42.308865   64164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:38:42.329401   64164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:38:42.360868   64164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:38:42.360974   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.374486   64164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:38:42.374553   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.388980   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.402641   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.422324   64164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:38:42.437799   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.451860   64164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.477122   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.493402   64164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:38:42.505688   64164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 22:38:42.505757   64164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 22:38:42.530017   64164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:38:42.546001   64164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:42.701505   64164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:38:42.835287   64164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:38:42.835365   64164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:38:42.842655   64164 start.go:563] Will wait 60s for crictl version
	I1013 22:38:42.842722   64164 ssh_runner.go:195] Run: which crictl
	I1013 22:38:42.848671   64164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 22:38:42.904374   64164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 22:38:42.904467   64164 ssh_runner.go:195] Run: crio --version
	I1013 22:38:42.939317   64164 ssh_runner.go:195] Run: crio --version
	I1013 22:38:42.977908   64164 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 22:38:41.057601   64307 out.go:252] * Updating the running kvm2 "pause-056726" VM ...
	I1013 22:38:41.057638   64307 machine.go:93] provisionDockerMachine start ...
	I1013 22:38:41.057654   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:41.057844   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.061178   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.061536   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.061574   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.061741   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.061937   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.062110   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.062280   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.062477   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.062726   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.062742   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:38:41.186066   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056726
	
	I1013 22:38:41.186102   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.186437   64307 buildroot.go:166] provisioning hostname "pause-056726"
	I1013 22:38:41.186470   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.186698   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.190353   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.190799   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.190830   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.191002   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.191218   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.191395   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.191546   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.191851   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.192120   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.192142   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056726 && echo "pause-056726" | sudo tee /etc/hostname
	I1013 22:38:41.336470   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056726
	
	I1013 22:38:41.336503   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.340097   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.340706   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.340753   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.341057   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.341297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.341500   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.341718   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.341910   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.342221   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.342262   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056726/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:38:41.465951   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:38:41.465998   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 22:38:41.466022   64307 buildroot.go:174] setting up certificates
	I1013 22:38:41.466039   64307 provision.go:84] configureAuth start
	I1013 22:38:41.466058   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.466350   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:41.470586   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.471088   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.471129   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.471590   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.475221   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.475850   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.475880   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.476182   64307 provision.go:143] copyHostCerts
	I1013 22:38:41.476251   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 22:38:41.476272   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 22:38:41.476339   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 22:38:41.476489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 22:38:41.476505   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 22:38:41.476543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 22:38:41.476636   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 22:38:41.476649   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 22:38:41.476681   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 22:38:41.476763   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.pause-056726 san=[127.0.0.1 192.168.50.114 localhost minikube pause-056726]
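The provision.go line above has minikube issue a server certificate signed by its own CA for the SANs listed (127.0.0.1, 192.168.50.114, localhost, minikube, pause-056726). A rough bash/openssl sketch of the same operation, useful only to see what ends up in server.pem; the 2048-bit key size and 365-day validity are assumptions, not values taken from this log:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.pause-056726" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.114,DNS:localhost,DNS:minikube,DNS:pause-056726') \
      -out server.pem
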
	I1013 22:38:41.976552   64307 provision.go:177] copyRemoteCerts
	I1013 22:38:41.976618   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:38:41.976659   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.980446   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.980969   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.980999   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.981297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.981600   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.981786   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.981995   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:42.080693   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:38:42.128691   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:38:42.168107   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:38:42.205857   64307 provision.go:87] duration metric: took 739.797808ms to configureAuth
	I1013 22:38:42.205917   64307 buildroot.go:189] setting minikube options for container-runtime
	I1013 22:38:42.206211   64307 config.go:182] Loaded profile config "pause-056726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.206320   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:42.213002   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:42.213603   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:42.213636   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:42.213913   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:42.214121   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:42.214296   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:42.214418   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:42.214664   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:42.214890   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:42.214910   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:38:42.979251   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:42.982309   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:42.982961   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:42.982993   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:42.983271   64164 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1013 22:38:42.988581   64164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:38:43.005536   64164 kubeadm.go:883] updating cluster {Name:auto-851286 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:auto-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:38:43.005631   64164 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:43.005677   64164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:43.045223   64164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 22:38:43.045327   64164 ssh_runner.go:195] Run: which lz4
	I1013 22:38:43.050212   64164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 22:38:43.055524   64164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 22:38:43.055559   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1013 22:38:44.726045   64164 crio.go:462] duration metric: took 1.675856504s to copy over tarball
	I1013 22:38:44.726111   64164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
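Because the crictl check above found no preloaded images, the ~409 MB preload tarball is copied to the guest and unpacked into /var with the tar flags shown (its completion is logged further down, interleaved with the pause-056726 output). Once the extraction has finished, the image store can be inspected directly; a sketch, assuming crictl is pointed at the CRI-O socket as it is elsewhere in this log:

    sudo crictl images --output json | grep -c kube-apiserver   # non-zero once the preload is in place
    sudo du -sh /var/lib/containers/storage                     # rough size of the unpacked store (default CRI-O root)
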
	I1013 22:38:42.943940   64655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:42.943991   64655 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:38:42.944015   64655 cache.go:58] Caching tarball of preloaded images
	I1013 22:38:42.944123   64655 preload.go:233] Found /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:38:42.944137   64655 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:38:42.944280   64655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/config.json ...
	I1013 22:38:42.944307   64655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/config.json: {Name:mkc044b6dadf0bc28bca7c223da5e424b662028c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:42.944480   64655 start.go:360] acquireMachinesLock for flannel-851286: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 22:38:48.109634   64655 start.go:364] duration metric: took 5.165105942s to acquireMachinesLock for "flannel-851286"
	I1013 22:38:48.109708   64655 start.go:93] Provisioning new machine with config: &{Name:flannel-851286 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.34.1 ClusterName:flannel-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:38:48.109833   64655 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 22:38:47.826384   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:38:47.826408   64307 machine.go:96] duration metric: took 6.768762066s to provisionDockerMachine
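The SSH output above confirms that /etc/sysconfig/crio.minikube now carries CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and that CRI-O was restarted with it. A quick way to double-check the drop-in on the guest, assuming CRI-O is managed by systemd as the restart command implies:

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio                        # expect: active
    systemctl cat crio | grep -i EnvironmentFile    # shows whether the unit reads the sysconfig file
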
	I1013 22:38:47.826422   64307 start.go:293] postStartSetup for "pause-056726" (driver="kvm2")
	I1013 22:38:47.826434   64307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:38:47.826454   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:47.826830   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:38:47.826862   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:47.830452   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.830934   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:47.830965   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.831171   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:47.831353   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.831505   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:47.831701   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:47.923525   64307 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:38:47.929446   64307 info.go:137] Remote host: Buildroot 2025.02
	I1013 22:38:47.929471   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 22:38:47.929552   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 22:38:47.929654   64307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 22:38:47.929798   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:38:47.945141   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:47.982748   64307 start.go:296] duration metric: took 156.310071ms for postStartSetup
	I1013 22:38:47.982792   64307 fix.go:56] duration metric: took 6.95032763s for fixHost
	I1013 22:38:47.982816   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:47.986308   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.986786   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:47.986817   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.987066   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:47.987297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.987484   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.987666   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:47.987856   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:47.988133   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:47.988149   64307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 22:38:48.109483   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760395128.101107801
	
	I1013 22:38:48.109504   64307 fix.go:216] guest clock: 1760395128.101107801
	I1013 22:38:48.109512   64307 fix.go:229] Guest: 2025-10-13 22:38:48.101107801 +0000 UTC Remote: 2025-10-13 22:38:47.98279722 +0000 UTC m=+24.069035821 (delta=118.310581ms)
	I1013 22:38:48.109537   64307 fix.go:200] guest clock delta is within tolerance: 118.310581ms
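The date +%s.%N round trip above is how minikube estimates guest/host clock skew; here the delta of about 118 ms is inside tolerance, so the guest clock is left alone. Roughly the same measurement done by hand, reusing the SSH key and user from the client lines above and assuming bc is available on the host:

    GUEST=$(ssh -i /home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa \
        docker@192.168.50.114 'date +%s.%N')
    HOST=$(date +%s.%N)
    echo "skew: $(echo "$HOST - $GUEST" | bc) s"
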
	I1013 22:38:48.109544   64307 start.go:83] releasing machines lock for "pause-056726", held for 7.07711387s
	I1013 22:38:48.109575   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.109858   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:48.113678   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.114210   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.114245   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.114431   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115054   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115281   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115402   64307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:38:48.115455   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:48.115585   64307 ssh_runner.go:195] Run: cat /version.json
	I1013 22:38:48.115610   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:48.120256   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.120941   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.121395   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.121420   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.121684   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:48.121714   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.121840   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.122058   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:48.122212   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:48.122373   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:48.122596   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:48.122603   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:48.122825   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:48.123254   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:48.209685   64307 ssh_runner.go:195] Run: systemctl --version
	I1013 22:38:48.237877   64307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:38:48.485856   64307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:38:48.496627   64307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:38:48.496704   64307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:38:48.510288   64307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
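The find invocation two lines up is hard to read because its shell metacharacters are logged unquoted. With quoting restored, and the mv target passed as a positional argument instead of being spliced into the sh -c string, it amounts to the following (same effect: rename any bridge or podman CNI configs out of the way unless they are already *.mk_disabled):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
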
	I1013 22:38:48.510318   64307 start.go:495] detecting cgroup driver to use...
	I1013 22:38:48.510400   64307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:38:48.539084   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:38:48.566554   64307 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:38:48.566613   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:38:48.596210   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:38:48.620854   64307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:38:48.872388   64307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:38:46.471242   64164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.745103745s)
	I1013 22:38:46.471285   64164 crio.go:469] duration metric: took 1.745212485s to extract the tarball
	I1013 22:38:46.471308   64164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 22:38:46.519200   64164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:46.567577   64164 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:46.567608   64164 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:38:46.567619   64164 kubeadm.go:934] updating node { 192.168.83.51 8443 v1.34.1 crio true true} ...
	I1013 22:38:46.567737   64164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-851286 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
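The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in minikube generates for this node; a little further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes). After the daemon-reload that follows, the effective unit can be reviewed like this (sketch):

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet    # merged view: base kubelet.service plus the 10-kubeadm.conf drop-in
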
	I1013 22:38:46.567814   64164 ssh_runner.go:195] Run: crio config
	I1013 22:38:46.615474   64164 cni.go:84] Creating CNI manager for ""
	I1013 22:38:46.615496   64164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:38:46.615514   64164 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:38:46.615542   64164 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.51 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-851286 NodeName:auto-851286 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:38:46.615704   64164 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-851286"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:38:46.615784   64164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:38:46.629422   64164 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:38:46.629481   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:38:46.642075   64164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1013 22:38:46.663799   64164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:38:46.685366   64164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
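The YAML dump above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document; the scp line just above writes it to /var/tmp/minikube/kubeadm.yaml.new, and it is later renamed and handed to kubeadm init --config. To sanity-check such a file by hand on the node, something along these lines should work (a sketch; kubeadm config validate exists in recent releases, and the preflight phase only runs checks):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new
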
	I1013 22:38:46.707001   64164 ssh_runner.go:195] Run: grep 192.168.83.51	control-plane.minikube.internal$ /etc/hosts
	I1013 22:38:46.711491   64164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
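The /etc/hosts one-liner above follows the same pattern as the host.minikube.internal update earlier: drop any stale line for the name, append the fresh entry, and copy the temp file back over /etc/hosts. Afterwards the guest should resolve both minikube-internal names; a sketch of the check, with the addresses taken from the echo commands in this log:

    grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts
    # 192.168.83.1    host.minikube.internal
    # 192.168.83.51   control-plane.minikube.internal
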
	I1013 22:38:46.726769   64164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:46.872142   64164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:38:46.908760   64164 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286 for IP: 192.168.83.51
	I1013 22:38:46.908796   64164 certs.go:195] generating shared ca certs ...
	I1013 22:38:46.908816   64164 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:46.909012   64164 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 22:38:46.909082   64164 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 22:38:46.909097   64164 certs.go:257] generating profile certs ...
	I1013 22:38:46.909218   64164 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.key
	I1013 22:38:46.909249   64164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt with IP's: []
	I1013 22:38:47.264644   64164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt ...
	I1013 22:38:47.264672   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: {Name:mk96b7d53a24feef47e43abd0db56ae5e7c97ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.264896   64164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.key ...
	I1013 22:38:47.264918   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.key: {Name:mk8dca3446bef38a09125c2861527c555dd12df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.265031   64164 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb
	I1013 22:38:47.265053   64164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.51]
	I1013 22:38:47.642563   64164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb ...
	I1013 22:38:47.642589   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb: {Name:mk11487dccab40f9c41f7ba133963c305ed74ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.642786   64164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb ...
	I1013 22:38:47.642806   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb: {Name:mkbb291d30accf2e11352db016fe7ab73ad18676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.642920   64164 certs.go:382] copying /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb -> /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt
	I1013 22:38:47.643021   64164 certs.go:386] copying /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb -> /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key
	I1013 22:38:47.643081   64164 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key
	I1013 22:38:47.643095   64164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt with IP's: []
	I1013 22:38:47.932169   64164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt ...
	I1013 22:38:47.932208   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt: {Name:mk4c59b0319d203b100d9c1f098dee25bdaa957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.932368   64164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key ...
	I1013 22:38:47.932380   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key: {Name:mk190c1fdba64fa7d3eb17d89b5a5eebf0a923be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
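The certs.go lines above mint the per-profile key pairs: the minikube-user client cert, the apiserver serving cert issued for the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.83.51, and the aggregator proxy-client cert. One way to confirm what actually ended up in the apiserver cert (sketch, run on the machine that holds the profile directory):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
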
	I1013 22:38:47.932551   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 22:38:47.932584   64164 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 22:38:47.932594   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:38:47.932614   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:38:47.932636   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:38:47.932657   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 22:38:47.932693   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:47.933315   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:38:47.970268   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:38:48.012289   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:38:48.050071   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:38:48.084180   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 22:38:48.126334   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:38:48.161717   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:38:48.249510   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:38:48.298551   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:38:48.333488   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 22:38:48.366929   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 22:38:48.414227   64164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:38:48.438853   64164 ssh_runner.go:195] Run: openssl version
	I1013 22:38:48.446362   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:38:48.464050   64164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:48.471498   64164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:48.471568   64164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:48.480456   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:38:48.500956   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 22:38:48.521780   64164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 22:38:48.530204   64164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 22:38:48.530271   64164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 22:38:48.541579   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 22:38:48.568261   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 22:38:48.594755   64164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 22:38:48.602350   64164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 22:38:48.602416   64164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 22:38:48.616235   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
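The pairs of openssl x509 -hash and ln -fs commands above reproduce OpenSSL's CApath convention by hand: every CA dropped into /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the two test certificates). The same step spelled out for a single cert (sketch):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
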
	I1013 22:38:48.635009   64164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:38:48.640966   64164 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:38:48.641035   64164 kubeadm.go:400] StartCluster: {Name:auto-851286 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:auto-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:48.641134   64164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:38:48.641264   64164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:38:48.691555   64164 cri.go:89] found id: ""
	I1013 22:38:48.691636   64164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:38:48.711469   64164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:38:48.725409   64164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:38:48.739704   64164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:38:48.739734   64164 kubeadm.go:157] found existing configuration files:
	
	I1013 22:38:48.739792   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:38:48.752777   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:38:48.752854   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:38:48.771237   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:38:48.787446   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:38:48.787514   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:38:48.804472   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:38:48.816949   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:38:48.817036   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:38:48.829845   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:38:48.842284   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:38:48.842359   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:38:48.857001   64164 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 22:38:48.925865   64164 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:38:48.925951   64164 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:38:49.045123   64164 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:38:49.045267   64164 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:38:49.045383   64164 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:38:49.062212   64164 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:38:49.240350   64164 out.go:252]   - Generating certificates and keys ...
	I1013 22:38:49.240442   64164 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:38:49.240493   64164 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:38:49.240558   64164 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:38:49.421424   64164 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:38:49.759626   64164 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:38:49.088960   64307 docker.go:234] disabling docker service ...
	I1013 22:38:49.089059   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:38:49.122978   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:38:49.142380   64307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:38:49.345900   64307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:38:49.582902   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:38:49.603147   64307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:38:49.634419   64307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:38:49.634491   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.649208   64307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:38:49.649288   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.682378   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.704376   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.758297   64307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:38:49.787167   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.820057   64307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.843948   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.878037   64307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:38:49.905531   64307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:38:49.925073   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:50.298279   64307 ssh_runner.go:195] Run: sudo systemctl restart crio
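The run of sed commands before this restart rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs, conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. Once crio is back up, the result can be verified like this (sketch):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info >/dev/null && echo "crio answering"
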
	I1013 22:38:50.864747   64307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:38:50.864846   64307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:38:50.873254   64307 start.go:563] Will wait 60s for crictl version
	I1013 22:38:50.873323   64307 ssh_runner.go:195] Run: which crictl
	I1013 22:38:50.880216   64307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 22:38:50.931241   64307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 22:38:50.931319   64307 ssh_runner.go:195] Run: crio --version
	I1013 22:38:50.968087   64307 ssh_runner.go:195] Run: crio --version
	I1013 22:38:51.010888   64307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 22:38:48.236403   64655 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1013 22:38:48.236656   64655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:38:48.236749   64655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:38:48.256345   64655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I1013 22:38:48.256993   64655 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:38:48.257682   64655 main.go:141] libmachine: Using API Version  1
	I1013 22:38:48.257717   64655 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:38:48.258151   64655 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:38:48.258342   64655 main.go:141] libmachine: (flannel-851286) Calling .GetMachineName
	I1013 22:38:48.258505   64655 main.go:141] libmachine: (flannel-851286) Calling .DriverName
	I1013 22:38:48.258684   64655 start.go:159] libmachine.API.Create for "flannel-851286" (driver="kvm2")
	I1013 22:38:48.258716   64655 client.go:168] LocalClient.Create starting
	I1013 22:38:48.258751   64655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem
	I1013 22:38:48.258799   64655 main.go:141] libmachine: Decoding PEM data...
	I1013 22:38:48.258823   64655 main.go:141] libmachine: Parsing certificate...
	I1013 22:38:48.258937   64655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem
	I1013 22:38:48.258969   64655 main.go:141] libmachine: Decoding PEM data...
	I1013 22:38:48.258984   64655 main.go:141] libmachine: Parsing certificate...
	I1013 22:38:48.259010   64655 main.go:141] libmachine: Running pre-create checks...
	I1013 22:38:48.259022   64655 main.go:141] libmachine: (flannel-851286) Calling .PreCreateCheck
	I1013 22:38:48.259450   64655 main.go:141] libmachine: (flannel-851286) Calling .GetConfigRaw
	I1013 22:38:48.259939   64655 main.go:141] libmachine: Creating machine...
	I1013 22:38:48.259952   64655 main.go:141] libmachine: (flannel-851286) Calling .Create
	I1013 22:38:48.260128   64655 main.go:141] libmachine: (flannel-851286) creating domain...
	I1013 22:38:48.260148   64655 main.go:141] libmachine: (flannel-851286) creating network...
	I1013 22:38:48.261921   64655 main.go:141] libmachine: (flannel-851286) DBG | found existing default network
	I1013 22:38:48.262092   64655 main.go:141] libmachine: (flannel-851286) DBG | <network connections='3'>
	I1013 22:38:48.262110   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>default</name>
	I1013 22:38:48.262132   64655 main.go:141] libmachine: (flannel-851286) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 22:38:48.262142   64655 main.go:141] libmachine: (flannel-851286) DBG |   <forward mode='nat'>
	I1013 22:38:48.262150   64655 main.go:141] libmachine: (flannel-851286) DBG |     <nat>
	I1013 22:38:48.262185   64655 main.go:141] libmachine: (flannel-851286) DBG |       <port start='1024' end='65535'/>
	I1013 22:38:48.262223   64655 main.go:141] libmachine: (flannel-851286) DBG |     </nat>
	I1013 22:38:48.262243   64655 main.go:141] libmachine: (flannel-851286) DBG |   </forward>
	I1013 22:38:48.262272   64655 main.go:141] libmachine: (flannel-851286) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 22:38:48.262287   64655 main.go:141] libmachine: (flannel-851286) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 22:38:48.262301   64655 main.go:141] libmachine: (flannel-851286) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 22:38:48.262307   64655 main.go:141] libmachine: (flannel-851286) DBG |     <dhcp>
	I1013 22:38:48.262317   64655 main.go:141] libmachine: (flannel-851286) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 22:38:48.262324   64655 main.go:141] libmachine: (flannel-851286) DBG |     </dhcp>
	I1013 22:38:48.262352   64655 main.go:141] libmachine: (flannel-851286) DBG |   </ip>
	I1013 22:38:48.262363   64655 main.go:141] libmachine: (flannel-851286) DBG | </network>
	I1013 22:38:48.262375   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.263745   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:48.263542   64716 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201390}
	I1013 22:38:48.263776   64655 main.go:141] libmachine: (flannel-851286) DBG | defining private network:
	I1013 22:38:48.263788   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.263795   64655 main.go:141] libmachine: (flannel-851286) DBG | <network>
	I1013 22:38:48.263803   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>mk-flannel-851286</name>
	I1013 22:38:48.263809   64655 main.go:141] libmachine: (flannel-851286) DBG |   <dns enable='no'/>
	I1013 22:38:48.263818   64655 main.go:141] libmachine: (flannel-851286) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 22:38:48.263825   64655 main.go:141] libmachine: (flannel-851286) DBG |     <dhcp>
	I1013 22:38:48.263835   64655 main.go:141] libmachine: (flannel-851286) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 22:38:48.263841   64655 main.go:141] libmachine: (flannel-851286) DBG |     </dhcp>
	I1013 22:38:48.263849   64655 main.go:141] libmachine: (flannel-851286) DBG |   </ip>
	I1013 22:38:48.263856   64655 main.go:141] libmachine: (flannel-851286) DBG | </network>
	I1013 22:38:48.263864   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.395487   64655 main.go:141] libmachine: (flannel-851286) DBG | creating private network mk-flannel-851286 192.168.39.0/24...
	I1013 22:38:48.521067   64655 main.go:141] libmachine: (flannel-851286) DBG | private network mk-flannel-851286 192.168.39.0/24 created
	I1013 22:38:48.521329   64655 main.go:141] libmachine: (flannel-851286) DBG | <network>
	I1013 22:38:48.521369   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>mk-flannel-851286</name>
	I1013 22:38:48.521382   64655 main.go:141] libmachine: (flannel-851286) setting up store path in /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286 ...
	I1013 22:38:48.521403   64655 main.go:141] libmachine: (flannel-851286) building disk image from file:///home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 22:38:48.521417   64655 main.go:141] libmachine: (flannel-851286) DBG |   <uuid>43f9a973-809d-485b-96ce-d0273013c796</uuid>
	I1013 22:38:48.521425   64655 main.go:141] libmachine: (flannel-851286) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1013 22:38:48.521436   64655 main.go:141] libmachine: (flannel-851286) DBG |   <mac address='52:54:00:06:2d:52'/>
	I1013 22:38:48.521457   64655 main.go:141] libmachine: (flannel-851286) Downloading /home/jenkins/minikube-integration/21724-15625/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 22:38:48.521471   64655 main.go:141] libmachine: (flannel-851286) DBG |   <dns enable='no'/>
	I1013 22:38:48.521482   64655 main.go:141] libmachine: (flannel-851286) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 22:38:48.521493   64655 main.go:141] libmachine: (flannel-851286) DBG |     <dhcp>
	I1013 22:38:48.521505   64655 main.go:141] libmachine: (flannel-851286) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 22:38:48.521514   64655 main.go:141] libmachine: (flannel-851286) DBG |     </dhcp>
	I1013 22:38:48.521522   64655 main.go:141] libmachine: (flannel-851286) DBG |   </ip>
	I1013 22:38:48.521531   64655 main.go:141] libmachine: (flannel-851286) DBG | </network>
	I1013 22:38:48.521548   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.521566   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:48.521324   64716 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-15625/.minikube
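	For reference, the private network whose XML is dumped above could be created by hand with the virsh CLI; a hedged sketch (the temp-file handling and the autostart call are illustrative additions, not something minikube is shown doing here):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// The network definition mirrors the mk-flannel-851286 XML printed in the log above.
	const networkXML = `<network>
	  <name>mk-flannel-851286</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		// Write the XML to a temporary file so virsh net-define can read it.
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			panic(err)
		}
		f.Close()

		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-flannel-851286"},
			{"net-autostart", "mk-flannel-851286"},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("virsh %v: %v\n%s", args, err, out)
				return
			}
		}
	}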
	I1013 22:38:49.760089   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:49.759950   64716 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/id_rsa...
	I1013 22:38:50.036591   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:50.036428   64716 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/flannel-851286.rawdisk...
	I1013 22:38:50.036628   64655 main.go:141] libmachine: (flannel-851286) DBG | Writing magic tar header
	I1013 22:38:50.036642   64655 main.go:141] libmachine: (flannel-851286) DBG | Writing SSH key tar header
	I1013 22:38:50.036655   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:50.036580   64716 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286 ...
	I1013 22:38:50.036760   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286
	I1013 22:38:50.036786   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube/machines
	I1013 22:38:50.036800   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286 (perms=drwx------)
	I1013 22:38:50.036831   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:38:50.036873   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube/machines (perms=drwxr-xr-x)
	I1013 22:38:50.036889   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625
	I1013 22:38:50.036905   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 22:38:50.036919   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins
	I1013 22:38:50.036929   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home
	I1013 22:38:50.036936   64655 main.go:141] libmachine: (flannel-851286) DBG | skipping /home - not owner
	I1013 22:38:50.036980   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube (perms=drwxr-xr-x)
	I1013 22:38:50.037002   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625 (perms=drwxrwxr-x)
	I1013 22:38:50.037018   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 22:38:50.037031   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 22:38:50.037046   64655 main.go:141] libmachine: (flannel-851286) defining domain...
	I1013 22:38:50.038631   64655 main.go:141] libmachine: (flannel-851286) defining domain using XML: 
	I1013 22:38:50.038648   64655 main.go:141] libmachine: (flannel-851286) <domain type='kvm'>
	I1013 22:38:50.038680   64655 main.go:141] libmachine: (flannel-851286)   <name>flannel-851286</name>
	I1013 22:38:50.038725   64655 main.go:141] libmachine: (flannel-851286)   <memory unit='MiB'>3072</memory>
	I1013 22:38:50.038737   64655 main.go:141] libmachine: (flannel-851286)   <vcpu>2</vcpu>
	I1013 22:38:50.038756   64655 main.go:141] libmachine: (flannel-851286)   <features>
	I1013 22:38:50.038766   64655 main.go:141] libmachine: (flannel-851286)     <acpi/>
	I1013 22:38:50.038773   64655 main.go:141] libmachine: (flannel-851286)     <apic/>
	I1013 22:38:50.038802   64655 main.go:141] libmachine: (flannel-851286)     <pae/>
	I1013 22:38:50.038815   64655 main.go:141] libmachine: (flannel-851286)   </features>
	I1013 22:38:50.038825   64655 main.go:141] libmachine: (flannel-851286)   <cpu mode='host-passthrough'>
	I1013 22:38:50.038836   64655 main.go:141] libmachine: (flannel-851286)   </cpu>
	I1013 22:38:50.038844   64655 main.go:141] libmachine: (flannel-851286)   <os>
	I1013 22:38:50.038858   64655 main.go:141] libmachine: (flannel-851286)     <type>hvm</type>
	I1013 22:38:50.038865   64655 main.go:141] libmachine: (flannel-851286)     <boot dev='cdrom'/>
	I1013 22:38:50.038881   64655 main.go:141] libmachine: (flannel-851286)     <boot dev='hd'/>
	I1013 22:38:50.038890   64655 main.go:141] libmachine: (flannel-851286)     <bootmenu enable='no'/>
	I1013 22:38:50.038902   64655 main.go:141] libmachine: (flannel-851286)   </os>
	I1013 22:38:50.038911   64655 main.go:141] libmachine: (flannel-851286)   <devices>
	I1013 22:38:50.038933   64655 main.go:141] libmachine: (flannel-851286)     <disk type='file' device='cdrom'>
	I1013 22:38:50.038951   64655 main.go:141] libmachine: (flannel-851286)       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/boot2docker.iso'/>
	I1013 22:38:50.038959   64655 main.go:141] libmachine: (flannel-851286)       <target dev='hdc' bus='scsi'/>
	I1013 22:38:50.038966   64655 main.go:141] libmachine: (flannel-851286)       <readonly/>
	I1013 22:38:50.038972   64655 main.go:141] libmachine: (flannel-851286)     </disk>
	I1013 22:38:50.039004   64655 main.go:141] libmachine: (flannel-851286)     <disk type='file' device='disk'>
	I1013 22:38:50.039030   64655 main.go:141] libmachine: (flannel-851286)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 22:38:50.039063   64655 main.go:141] libmachine: (flannel-851286)       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/flannel-851286.rawdisk'/>
	I1013 22:38:50.039077   64655 main.go:141] libmachine: (flannel-851286)       <target dev='hda' bus='virtio'/>
	I1013 22:38:50.039087   64655 main.go:141] libmachine: (flannel-851286)     </disk>
	I1013 22:38:50.039095   64655 main.go:141] libmachine: (flannel-851286)     <interface type='network'>
	I1013 22:38:50.039110   64655 main.go:141] libmachine: (flannel-851286)       <source network='mk-flannel-851286'/>
	I1013 22:38:50.039119   64655 main.go:141] libmachine: (flannel-851286)       <model type='virtio'/>
	I1013 22:38:50.039145   64655 main.go:141] libmachine: (flannel-851286)     </interface>
	I1013 22:38:50.039172   64655 main.go:141] libmachine: (flannel-851286)     <interface type='network'>
	I1013 22:38:50.039181   64655 main.go:141] libmachine: (flannel-851286)       <source network='default'/>
	I1013 22:38:50.039209   64655 main.go:141] libmachine: (flannel-851286)       <model type='virtio'/>
	I1013 22:38:50.039228   64655 main.go:141] libmachine: (flannel-851286)     </interface>
	I1013 22:38:50.039258   64655 main.go:141] libmachine: (flannel-851286)     <serial type='pty'>
	I1013 22:38:50.039269   64655 main.go:141] libmachine: (flannel-851286)       <target port='0'/>
	I1013 22:38:50.039278   64655 main.go:141] libmachine: (flannel-851286)     </serial>
	I1013 22:38:50.039303   64655 main.go:141] libmachine: (flannel-851286)     <console type='pty'>
	I1013 22:38:50.039312   64655 main.go:141] libmachine: (flannel-851286)       <target type='serial' port='0'/>
	I1013 22:38:50.039318   64655 main.go:141] libmachine: (flannel-851286)     </console>
	I1013 22:38:50.039326   64655 main.go:141] libmachine: (flannel-851286)     <rng model='virtio'>
	I1013 22:38:50.039334   64655 main.go:141] libmachine: (flannel-851286)       <backend model='random'>/dev/random</backend>
	I1013 22:38:50.039342   64655 main.go:141] libmachine: (flannel-851286)     </rng>
	I1013 22:38:50.039348   64655 main.go:141] libmachine: (flannel-851286)   </devices>
	I1013 22:38:50.039355   64655 main.go:141] libmachine: (flannel-851286) </domain>
	I1013 22:38:50.039361   64655 main.go:141] libmachine: (flannel-851286) 
	I1013 22:38:50.131891   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:c6:f4:70 in network default
	I1013 22:38:50.132651   64655 main.go:141] libmachine: (flannel-851286) starting domain...
	I1013 22:38:50.132694   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:50.132703   64655 main.go:141] libmachine: (flannel-851286) ensuring networks are active...
	I1013 22:38:50.133591   64655 main.go:141] libmachine: (flannel-851286) Ensuring network default is active
	I1013 22:38:50.134010   64655 main.go:141] libmachine: (flannel-851286) Ensuring network mk-flannel-851286 is active
	I1013 22:38:50.134894   64655 main.go:141] libmachine: (flannel-851286) getting domain XML...
	I1013 22:38:50.136194   64655 main.go:141] libmachine: (flannel-851286) DBG | starting domain XML:
	I1013 22:38:50.136218   64655 main.go:141] libmachine: (flannel-851286) DBG | <domain type='kvm'>
	I1013 22:38:50.136229   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>flannel-851286</name>
	I1013 22:38:50.136238   64655 main.go:141] libmachine: (flannel-851286) DBG |   <uuid>7ab29013-61ac-4ddf-a05f-47c403c9b522</uuid>
	I1013 22:38:50.136248   64655 main.go:141] libmachine: (flannel-851286) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 22:38:50.136267   64655 main.go:141] libmachine: (flannel-851286) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 22:38:50.136278   64655 main.go:141] libmachine: (flannel-851286) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 22:38:50.136283   64655 main.go:141] libmachine: (flannel-851286) DBG |   <os>
	I1013 22:38:50.136294   64655 main.go:141] libmachine: (flannel-851286) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 22:38:50.136301   64655 main.go:141] libmachine: (flannel-851286) DBG |     <boot dev='cdrom'/>
	I1013 22:38:50.136322   64655 main.go:141] libmachine: (flannel-851286) DBG |     <boot dev='hd'/>
	I1013 22:38:50.136329   64655 main.go:141] libmachine: (flannel-851286) DBG |     <bootmenu enable='no'/>
	I1013 22:38:50.136340   64655 main.go:141] libmachine: (flannel-851286) DBG |   </os>
	I1013 22:38:50.136356   64655 main.go:141] libmachine: (flannel-851286) DBG |   <features>
	I1013 22:38:50.136367   64655 main.go:141] libmachine: (flannel-851286) DBG |     <acpi/>
	I1013 22:38:50.136375   64655 main.go:141] libmachine: (flannel-851286) DBG |     <apic/>
	I1013 22:38:50.136387   64655 main.go:141] libmachine: (flannel-851286) DBG |     <pae/>
	I1013 22:38:50.136396   64655 main.go:141] libmachine: (flannel-851286) DBG |   </features>
	I1013 22:38:50.136407   64655 main.go:141] libmachine: (flannel-851286) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 22:38:50.136426   64655 main.go:141] libmachine: (flannel-851286) DBG |   <clock offset='utc'/>
	I1013 22:38:50.136438   64655 main.go:141] libmachine: (flannel-851286) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 22:38:50.136454   64655 main.go:141] libmachine: (flannel-851286) DBG |   <on_reboot>restart</on_reboot>
	I1013 22:38:50.136486   64655 main.go:141] libmachine: (flannel-851286) DBG |   <on_crash>destroy</on_crash>
	I1013 22:38:50.136509   64655 main.go:141] libmachine: (flannel-851286) DBG |   <devices>
	I1013 22:38:50.136521   64655 main.go:141] libmachine: (flannel-851286) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 22:38:50.136531   64655 main.go:141] libmachine: (flannel-851286) DBG |     <disk type='file' device='cdrom'>
	I1013 22:38:50.136542   64655 main.go:141] libmachine: (flannel-851286) DBG |       <driver name='qemu' type='raw'/>
	I1013 22:38:50.136558   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/boot2docker.iso'/>
	I1013 22:38:50.136571   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 22:38:50.136583   64655 main.go:141] libmachine: (flannel-851286) DBG |       <readonly/>
	I1013 22:38:50.136595   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 22:38:50.136606   64655 main.go:141] libmachine: (flannel-851286) DBG |     </disk>
	I1013 22:38:50.136615   64655 main.go:141] libmachine: (flannel-851286) DBG |     <disk type='file' device='disk'>
	I1013 22:38:50.136624   64655 main.go:141] libmachine: (flannel-851286) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 22:38:50.136637   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/flannel-851286.rawdisk'/>
	I1013 22:38:50.136646   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target dev='hda' bus='virtio'/>
	I1013 22:38:50.136661   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 22:38:50.136669   64655 main.go:141] libmachine: (flannel-851286) DBG |     </disk>
	I1013 22:38:50.136679   64655 main.go:141] libmachine: (flannel-851286) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 22:38:50.136688   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 22:38:50.136696   64655 main.go:141] libmachine: (flannel-851286) DBG |     </controller>
	I1013 22:38:50.136709   64655 main.go:141] libmachine: (flannel-851286) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 22:38:50.136719   64655 main.go:141] libmachine: (flannel-851286) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 22:38:50.136736   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 22:38:50.136746   64655 main.go:141] libmachine: (flannel-851286) DBG |     </controller>
	I1013 22:38:50.136757   64655 main.go:141] libmachine: (flannel-851286) DBG |     <interface type='network'>
	I1013 22:38:50.136767   64655 main.go:141] libmachine: (flannel-851286) DBG |       <mac address='52:54:00:49:d1:5b'/>
	I1013 22:38:50.136777   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source network='mk-flannel-851286'/>
	I1013 22:38:50.136787   64655 main.go:141] libmachine: (flannel-851286) DBG |       <model type='virtio'/>
	I1013 22:38:50.136799   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 22:38:50.136812   64655 main.go:141] libmachine: (flannel-851286) DBG |     </interface>
	I1013 22:38:50.136824   64655 main.go:141] libmachine: (flannel-851286) DBG |     <interface type='network'>
	I1013 22:38:50.136837   64655 main.go:141] libmachine: (flannel-851286) DBG |       <mac address='52:54:00:c6:f4:70'/>
	I1013 22:38:50.136860   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source network='default'/>
	I1013 22:38:50.136870   64655 main.go:141] libmachine: (flannel-851286) DBG |       <model type='virtio'/>
	I1013 22:38:50.136879   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 22:38:50.136894   64655 main.go:141] libmachine: (flannel-851286) DBG |     </interface>
	I1013 22:38:50.136912   64655 main.go:141] libmachine: (flannel-851286) DBG |     <serial type='pty'>
	I1013 22:38:50.136925   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target type='isa-serial' port='0'>
	I1013 22:38:50.136943   64655 main.go:141] libmachine: (flannel-851286) DBG |         <model name='isa-serial'/>
	I1013 22:38:50.136954   64655 main.go:141] libmachine: (flannel-851286) DBG |       </target>
	I1013 22:38:50.136959   64655 main.go:141] libmachine: (flannel-851286) DBG |     </serial>
	I1013 22:38:50.136964   64655 main.go:141] libmachine: (flannel-851286) DBG |     <console type='pty'>
	I1013 22:38:50.136977   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target type='serial' port='0'/>
	I1013 22:38:50.136985   64655 main.go:141] libmachine: (flannel-851286) DBG |     </console>
	I1013 22:38:50.136992   64655 main.go:141] libmachine: (flannel-851286) DBG |     <input type='mouse' bus='ps2'/>
	I1013 22:38:50.137000   64655 main.go:141] libmachine: (flannel-851286) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 22:38:50.137007   64655 main.go:141] libmachine: (flannel-851286) DBG |     <audio id='1' type='none'/>
	I1013 22:38:50.137026   64655 main.go:141] libmachine: (flannel-851286) DBG |     <memballoon model='virtio'>
	I1013 22:38:50.137040   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 22:38:50.137053   64655 main.go:141] libmachine: (flannel-851286) DBG |     </memballoon>
	I1013 22:38:50.137062   64655 main.go:141] libmachine: (flannel-851286) DBG |     <rng model='virtio'>
	I1013 22:38:50.137074   64655 main.go:141] libmachine: (flannel-851286) DBG |       <backend model='random'>/dev/random</backend>
	I1013 22:38:50.137087   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 22:38:50.137099   64655 main.go:141] libmachine: (flannel-851286) DBG |     </rng>
	I1013 22:38:50.137109   64655 main.go:141] libmachine: (flannel-851286) DBG |   </devices>
	I1013 22:38:50.137119   64655 main.go:141] libmachine: (flannel-851286) DBG | </domain>
	I1013 22:38:50.137129   64655 main.go:141] libmachine: (flannel-851286) DBG | 
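	Defining and booting an equivalent domain manually from the XML dumped above would look roughly like this with virsh; the flannel-851286.xml file name is a placeholder for wherever that XML is saved:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// virsh runs one virsh subcommand against the system libvirt, the CLI
	// equivalent of the define/start calls made by libmachine above.
	func virsh(args ...string) error {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		for _, args := range [][]string{
			{"define", "flannel-851286.xml"}, // placeholder path holding the domain XML
			{"start", "flannel-851286"},
		} {
			if err := virsh(args...); err != nil {
				fmt.Println(err)
				return
			}
		}
	}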
	I1013 22:38:51.640639   64655 main.go:141] libmachine: (flannel-851286) waiting for domain to start...
	I1013 22:38:51.642323   64655 main.go:141] libmachine: (flannel-851286) domain is now running
	I1013 22:38:51.642352   64655 main.go:141] libmachine: (flannel-851286) waiting for IP...
	I1013 22:38:51.643327   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:51.644137   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:51.644175   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:51.644703   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:51.644755   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:51.644696   64716 retry.go:31] will retry after 255.96681ms: waiting for domain to come up
	I1013 22:38:51.902641   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:51.903410   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:51.903435   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:51.903839   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:51.903872   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:51.903816   64716 retry.go:31] will retry after 290.474278ms: waiting for domain to come up
	I1013 22:38:52.196591   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:52.197345   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:52.197376   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:52.197783   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:52.197815   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:52.197756   64716 retry.go:31] will retry after 318.393842ms: waiting for domain to come up
	I1013 22:38:52.518663   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:52.519447   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:52.519479   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:52.519866   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:52.519939   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:52.519853   64716 retry.go:31] will retry after 485.032894ms: waiting for domain to come up
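	The retry loop above polls the libvirt DHCP leases (falling back to an ARP listing) with growing delays until the new MAC address shows up. A hedged sketch of the same wait using virsh net-dhcp-leases and a simple backoff:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForIP polls `virsh net-dhcp-leases` until a lease for the given MAC
	// appears, roughly mirroring the retry loop in the log above.
	func waitForIP(network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			out, err := exec.Command("virsh", "--connect", "qemu:///system",
				"net-dhcp-leases", network).CombinedOutput()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if !strings.Contains(line, mac) {
						continue
					}
					// Columns: expiry date, expiry time, MAC, protocol, IP/prefix, hostname, client ID.
					fields := strings.Fields(line)
					if len(fields) >= 5 {
						return strings.Split(fields[4], "/")[0], nil
					}
				}
			}
			time.Sleep(delay)
			if delay < 2*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
	}

	func main() {
		ip, err := waitForIP("mk-flannel-851286", "52:54:00:49:d1:5b", time.Minute)
		fmt.Println(ip, err)
	}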
	I1013 22:38:50.001323   64164 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:38:50.248683   64164 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:38:50.248873   64164 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-851286 localhost] and IPs [192.168.83.51 127.0.0.1 ::1]
	I1013 22:38:50.673686   64164 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:38:50.673877   64164 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-851286 localhost] and IPs [192.168.83.51 127.0.0.1 ::1]
	I1013 22:38:51.128872   64164 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:38:51.327351   64164 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:38:51.963053   64164 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:38:51.963278   64164 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:38:52.009107   64164 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:38:52.321882   64164 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:38:52.580796   64164 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:38:52.973514   64164 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:38:53.590747   64164 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:38:53.591466   64164 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:38:53.597314   64164 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:38:51.012295   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:51.016095   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:51.016687   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:51.016718   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:51.017048   64307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 22:38:51.023670   64307 kubeadm.go:883] updating cluster {Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:38:51.023832   64307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:51.023891   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:51.081614   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:51.081644   64307 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:38:51.081718   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:51.130060   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:51.130087   64307 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:38:51.130095   64307 kubeadm.go:934] updating node { 192.168.50.114 8443 v1.34.1 crio true true} ...
	I1013 22:38:51.130248   64307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-056726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:38:51.130346   64307 ssh_runner.go:195] Run: crio config
	I1013 22:38:51.201189   64307 cni.go:84] Creating CNI manager for ""
	I1013 22:38:51.201222   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:38:51.201242   64307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:38:51.201267   64307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056726 NodeName:pause-056726 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:38:51.201429   64307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056726"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
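	One way to sanity-check a generated multi-document config like the one above before handing it to kubeadm is to decode each YAML document and confirm it carries an apiVersion and kind. A sketch using gopkg.in/yaml.v3; the file path matches the kubeadm.yaml.new target that is scp'd a few lines below, and none of this is part of minikube itself:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the scp step in the log below.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Decode one YAML document per call until EOF, reporting apiVersion and kind.
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(fmt.Sprintf("document %d: %v", i, err))
			}
			fmt.Printf("doc %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
		}
	}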
	
	I1013 22:38:51.201498   64307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:38:51.217808   64307 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:38:51.217897   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:38:51.233569   64307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1013 22:38:51.261591   64307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:38:51.287766   64307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1013 22:38:51.316017   64307 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I1013 22:38:51.321143   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:51.572704   64307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:38:51.643105   64307 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726 for IP: 192.168.50.114
	I1013 22:38:51.643127   64307 certs.go:195] generating shared ca certs ...
	I1013 22:38:51.643172   64307 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:51.643346   64307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 22:38:51.643408   64307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 22:38:51.643424   64307 certs.go:257] generating profile certs ...
	I1013 22:38:51.643550   64307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/client.key
	I1013 22:38:51.643650   64307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.key.470e9060
	I1013 22:38:51.643709   64307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.key
	I1013 22:38:51.643862   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 22:38:51.643922   64307 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 22:38:51.643944   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:38:51.643989   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:38:51.644039   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:38:51.644088   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 22:38:51.644185   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:51.645127   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:38:51.767866   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:38:51.872623   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:38:51.962000   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:38:52.020524   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:38:52.106256   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:38:52.186178   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:38:52.253585   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:38:52.358197   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 22:38:52.424688   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:38:52.471765   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 22:38:52.527060   64307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:38:52.595263   64307 ssh_runner.go:195] Run: openssl version
	I1013 22:38:52.603719   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 22:38:52.624291   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.630957   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.631025   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.639973   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:38:52.654151   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:38:52.671610   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.678096   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.678190   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.686913   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:38:52.703128   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 22:38:52.733509   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.747790   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.747855   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.762122   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 22:38:52.795639   64307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:38:52.802035   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:38:52.810138   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:38:52.818740   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:38:52.826691   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:38:52.835090   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:38:52.843652   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
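	The openssl -checkend 86400 runs above fail when a certificate expires within the next 24 hours. The equivalent check with Go's crypto/x509, as a sketch (the file list simply mirrors the certificates probed in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question `openssl x509 -noout -checkend <seconds>` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, soon, err)
		}
	}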
	I1013 22:38:52.852783   64307 kubeadm.go:400] StartCluster: {Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:52.852934   64307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:38:52.852998   64307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:38:52.902942   64307 cri.go:89] found id: "1976935c4f01c7b9a13df7bb5d1d9ef512d248f7c51f7a17a8b7f01f5550a483"
	I1013 22:38:52.902969   64307 cri.go:89] found id: "7c29c423def7a994b132040a9614198e6a709fb14a87b4aacd14e813aa559ac8"
	I1013 22:38:52.902975   64307 cri.go:89] found id: "2da2442d80a23198b8938c1f85a9a443748c2b569431aed123dd840114bc725e"
	I1013 22:38:52.902980   64307 cri.go:89] found id: "46e601cd1b2a167997d7436a8e04ac20c370b61038e9b38abdbcafb3714df69a"
	I1013 22:38:52.902984   64307 cri.go:89] found id: "6eecfceb7178ca1572d2db0b0e0d133f998fef7c72f5be015811563a9c3b9ab7"
	I1013 22:38:52.902989   64307 cri.go:89] found id: "346a3bf45b515168f44c5eb17452a5999dc929d16bb03bfcb6b992a05d0e5953"
	I1013 22:38:52.902992   64307 cri.go:89] found id: "8341b5658a3dbfd304eee1bfcc1db60614f0dde6f2f0db558b10851d5bea38ab"
	I1013 22:38:52.902996   64307 cri.go:89] found id: "cc85e6bee7a15884026948a07a78f5832470b4fdf1803cf08249b1b207b9a86c"
	I1013 22:38:52.902999   64307 cri.go:89] found id: ""
	I1013 22:38:52.903071   64307 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
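	The last step captured in the log above shells out to crictl with a pod-namespace label filter and collects the returned container IDs before moving on to runc. A minimal sketch of that listing step, assuming crictl is on PATH and sudo needs no password:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers mirrors the `crictl ps -a --quiet --label ...` call
	// in the log: it returns the IDs of all containers labelled with the kube-system namespace.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		fmt.Println(len(ids), "containers", err)
	}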
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-056726 -n pause-056726
helpers_test.go:269: (dbg) Run:  kubectl --context pause-056726 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-056726 -n pause-056726
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-056726 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-056726 logs -n 25: (1.638483277s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p running-upgrade-410631 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                  │ running-upgrade-410631    │ jenkins │ v1.37.0 │ 13 Oct 25 22:35 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                         │ kubernetes-upgrade-766348 │ jenkins │ v1.37.0 │ 13 Oct 25 22:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ kubernetes-upgrade-766348 │ jenkins │ v1.37.0 │ 13 Oct 25 22:35 UTC │ 13 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-794544 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ stop    │ -p NoKubernetes-794544                                                                                                                                                                                                                              │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p NoKubernetes-794544 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-694787 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ stopped-upgrade-694787    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ delete  │ -p stopped-upgrade-694787                                                                                                                                                                                                                           │ stopped-upgrade-694787    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p pause-056726 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                 │ pause-056726              │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:38 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-410631 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                         │ running-upgrade-410631    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ delete  │ -p running-upgrade-410631                                                                                                                                                                                                                           │ running-upgrade-410631    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p cert-expiration-591329 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                    │ cert-expiration-591329    │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:37 UTC │
	│ ssh     │ -p NoKubernetes-794544 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │                     │
	│ delete  │ -p NoKubernetes-794544                                                                                                                                                                                                                              │ NoKubernetes-794544       │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:36 UTC │
	│ start   │ -p force-systemd-flag-331035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                               │ force-systemd-flag-331035 │ jenkins │ v1.37.0 │ 13 Oct 25 22:36 UTC │ 13 Oct 25 22:38 UTC │
	│ delete  │ -p kubernetes-upgrade-766348                                                                                                                                                                                                                        │ kubernetes-upgrade-766348 │ jenkins │ v1.37.0 │ 13 Oct 25 22:37 UTC │ 13 Oct 25 22:37 UTC │
	│ start   │ -p cert-options-746983 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:37 UTC │ 13 Oct 25 22:38 UTC │
	│ ssh     │ force-systemd-flag-331035 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                │ force-systemd-flag-331035 │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ delete  │ -p force-systemd-flag-331035                                                                                                                                                                                                                        │ force-systemd-flag-331035 │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ start   │ -p auto-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                   │ auto-851286               │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │                     │
	│ start   │ -p pause-056726 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                          │ pause-056726              │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:39 UTC │
	│ ssh     │ cert-options-746983 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ ssh     │ -p cert-options-746983 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ delete  │ -p cert-options-746983                                                                                                                                                                                                                              │ cert-options-746983       │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │ 13 Oct 25 22:38 UTC │
	│ start   │ -p flannel-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                  │ flannel-851286            │ jenkins │ v1.37.0 │ 13 Oct 25 22:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:38:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:38:42.856352   64655 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:38:42.856626   64655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:38:42.856636   64655 out.go:374] Setting ErrFile to fd 2...
	I1013 22:38:42.856640   64655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:38:42.856811   64655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:38:42.857330   64655 out.go:368] Setting JSON to false
	I1013 22:38:42.858309   64655 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8471,"bootTime":1760386652,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:38:42.858420   64655 start.go:141] virtualization: kvm guest
	I1013 22:38:42.861162   64655 out.go:179] * [flannel-851286] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:38:42.862688   64655 notify.go:220] Checking for updates...
	I1013 22:38:42.862717   64655 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:38:42.864349   64655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:38:42.865845   64655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:38:42.867071   64655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:38:42.868375   64655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:38:42.869596   64655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:38:42.871251   64655 config.go:182] Loaded profile config "auto-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.871372   64655 config.go:182] Loaded profile config "cert-expiration-591329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.871528   64655 config.go:182] Loaded profile config "pause-056726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.871631   64655 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:38:42.909871   64655 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 22:38:42.911332   64655 start.go:305] selected driver: kvm2
	I1013 22:38:42.911353   64655 start.go:925] validating driver "kvm2" against <nil>
	I1013 22:38:42.911366   64655 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:38:42.912093   64655 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:38:42.912177   64655 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:38:42.926272   64655 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:38:42.926308   64655 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 22:38:42.940181   64655 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 22:38:42.940217   64655 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:38:42.940516   64655 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:38:42.940546   64655 cni.go:84] Creating CNI manager for "flannel"
	I1013 22:38:42.940553   64655 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1013 22:38:42.940594   64655 start.go:349] cluster config:
	{Name:flannel-851286 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:42.940683   64655 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:38:42.942553   64655 out.go:179] * Starting "flannel-851286" primary control-plane node in "flannel-851286" cluster
	I1013 22:38:39.953361   64164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:38:39.953390   64164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 22:38:39.953416   64164 buildroot.go:174] setting up certificates
	I1013 22:38:39.953440   64164 provision.go:84] configureAuth start
	I1013 22:38:39.953456   64164 main.go:141] libmachine: (auto-851286) Calling .GetMachineName
	I1013 22:38:39.953766   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:39.957129   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.957695   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:39.957724   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.958030   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:39.961396   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.961782   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:39.961799   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:39.962030   64164 provision.go:143] copyHostCerts
	I1013 22:38:39.962094   64164 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 22:38:39.962116   64164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 22:38:39.962225   64164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 22:38:39.962375   64164 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 22:38:39.962390   64164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 22:38:39.962436   64164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 22:38:39.962539   64164 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 22:38:39.962553   64164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 22:38:39.962594   64164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 22:38:39.962687   64164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.auto-851286 san=[127.0.0.1 192.168.83.51 auto-851286 localhost minikube]
	I1013 22:38:40.244643   64164 provision.go:177] copyRemoteCerts
	I1013 22:38:40.244697   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:38:40.244718   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.248058   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.248511   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.248540   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.248750   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.248964   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.249211   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.249380   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:40.344712   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:38:40.389329   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:38:40.423888   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 22:38:40.464194   64164 provision.go:87] duration metric: took 510.734588ms to configureAuth
	I1013 22:38:40.464235   64164 buildroot.go:189] setting minikube options for container-runtime
	I1013 22:38:40.464479   64164 config.go:182] Loaded profile config "auto-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:40.464622   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.468367   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.469049   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.469086   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.469333   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.469580   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.469760   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.469922   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.470086   64164 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:40.470321   64164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.51 22 <nil> <nil>}
	I1013 22:38:40.470337   64164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:38:40.741468   64164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:38:40.741492   64164 main.go:141] libmachine: Checking connection to Docker...
	I1013 22:38:40.741503   64164 main.go:141] libmachine: (auto-851286) Calling .GetURL
	I1013 22:38:40.743136   64164 main.go:141] libmachine: (auto-851286) DBG | using libvirt version 8000000
	I1013 22:38:40.746366   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.746798   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.746830   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.747020   64164 main.go:141] libmachine: Docker is up and running!
	I1013 22:38:40.747038   64164 main.go:141] libmachine: Reticulating splines...
	I1013 22:38:40.747060   64164 client.go:171] duration metric: took 20.685647045s to LocalClient.Create
	I1013 22:38:40.747098   64164 start.go:167] duration metric: took 20.685746671s to libmachine.API.Create "auto-851286"
	I1013 22:38:40.747114   64164 start.go:293] postStartSetup for "auto-851286" (driver="kvm2")
	I1013 22:38:40.747125   64164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:38:40.747151   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:40.747438   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:38:40.747468   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.749975   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.750443   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.750469   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.750631   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.750808   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.750975   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.751186   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:40.843488   64164 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:38:40.849898   64164 info.go:137] Remote host: Buildroot 2025.02
	I1013 22:38:40.849932   64164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 22:38:40.850021   64164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 22:38:40.850133   64164 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 22:38:40.850273   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:38:40.866413   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:40.910325   64164 start.go:296] duration metric: took 163.195706ms for postStartSetup
	I1013 22:38:40.910372   64164 main.go:141] libmachine: (auto-851286) Calling .GetConfigRaw
	I1013 22:38:40.910965   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:40.914257   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.914777   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.914803   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.915146   64164 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/config.json ...
	I1013 22:38:40.916018   64164 start.go:128] duration metric: took 20.873203703s to createHost
	I1013 22:38:40.916051   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:40.919356   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.919766   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:40.919791   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:40.920006   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:40.920229   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.920407   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:40.920603   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:40.920801   64164 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:40.921113   64164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.83.51 22 <nil> <nil>}
	I1013 22:38:40.921134   64164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 22:38:41.032251   64164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760395120.996421859
	
	I1013 22:38:41.032276   64164 fix.go:216] guest clock: 1760395120.996421859
	I1013 22:38:41.032286   64164 fix.go:229] Guest: 2025-10-13 22:38:40.996421859 +0000 UTC Remote: 2025-10-13 22:38:40.916037001 +0000 UTC m=+21.004007806 (delta=80.384858ms)
	I1013 22:38:41.032312   64164 fix.go:200] guest clock delta is within tolerance: 80.384858ms
	I1013 22:38:41.032318   64164 start.go:83] releasing machines lock for "auto-851286", held for 20.989617228s
	I1013 22:38:41.032346   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.032667   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:41.036080   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.036574   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:41.036624   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.036829   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.037476   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.037681   64164 main.go:141] libmachine: (auto-851286) Calling .DriverName
	I1013 22:38:41.037815   64164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:38:41.037864   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:41.037957   64164 ssh_runner.go:195] Run: cat /version.json
	I1013 22:38:41.037997   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHHostname
	I1013 22:38:41.042243   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.042385   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.043030   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:41.043087   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.043214   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:41.043231   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:41.043570   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:41.043782   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:41.043797   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHPort
	I1013 22:38:41.044108   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHKeyPath
	I1013 22:38:41.044148   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:41.044269   64164 main.go:141] libmachine: (auto-851286) Calling .GetSSHUsername
	I1013 22:38:41.044375   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:41.044869   64164 sshutil.go:53] new ssh client: &{IP:192.168.83.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/auto-851286/id_rsa Username:docker}
	I1013 22:38:41.127087   64164 ssh_runner.go:195] Run: systemctl --version
	I1013 22:38:41.154041   64164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:38:41.320547   64164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:38:41.328073   64164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:38:41.328153   64164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:38:41.351687   64164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1013 22:38:41.351715   64164 start.go:495] detecting cgroup driver to use...
	I1013 22:38:41.351792   64164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:38:41.373410   64164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:38:41.395305   64164 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:38:41.395361   64164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:38:41.420896   64164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:38:41.445748   64164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:38:41.640002   64164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:38:41.904618   64164 docker.go:234] disabling docker service ...
	I1013 22:38:41.904697   64164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:38:41.924294   64164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:38:41.941657   64164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:38:42.131924   64164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:38:42.308865   64164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:38:42.329401   64164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:38:42.360868   64164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:38:42.360974   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.374486   64164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:38:42.374553   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.388980   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.402641   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.422324   64164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:38:42.437799   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.451860   64164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.477122   64164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:42.493402   64164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:38:42.505688   64164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 22:38:42.505757   64164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 22:38:42.530017   64164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:38:42.546001   64164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:42.701505   64164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:38:42.835287   64164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:38:42.835365   64164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:38:42.842655   64164 start.go:563] Will wait 60s for crictl version
	I1013 22:38:42.842722   64164 ssh_runner.go:195] Run: which crictl
	I1013 22:38:42.848671   64164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 22:38:42.904374   64164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 22:38:42.904467   64164 ssh_runner.go:195] Run: crio --version
	I1013 22:38:42.939317   64164 ssh_runner.go:195] Run: crio --version
	I1013 22:38:42.977908   64164 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 22:38:41.057601   64307 out.go:252] * Updating the running kvm2 "pause-056726" VM ...
	I1013 22:38:41.057638   64307 machine.go:93] provisionDockerMachine start ...
	I1013 22:38:41.057654   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:41.057844   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.061178   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.061536   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.061574   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.061741   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.061937   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.062110   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.062280   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.062477   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.062726   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.062742   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:38:41.186066   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056726
	
	I1013 22:38:41.186102   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.186437   64307 buildroot.go:166] provisioning hostname "pause-056726"
	I1013 22:38:41.186470   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.186698   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.190353   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.190799   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.190830   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.191002   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.191218   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.191395   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.191546   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.191851   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.192120   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.192142   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056726 && echo "pause-056726" | sudo tee /etc/hostname
	I1013 22:38:41.336470   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056726
	
	I1013 22:38:41.336503   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.340097   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.340706   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.340753   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.341057   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.341297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.341500   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.341718   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.341910   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:41.342221   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:41.342262   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056726/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:38:41.465951   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:38:41.465998   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21724-15625/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-15625/.minikube}
	I1013 22:38:41.466022   64307 buildroot.go:174] setting up certificates
	I1013 22:38:41.466039   64307 provision.go:84] configureAuth start
	I1013 22:38:41.466058   64307 main.go:141] libmachine: (pause-056726) Calling .GetMachineName
	I1013 22:38:41.466350   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:41.470586   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.471088   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.471129   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.471590   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.475221   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.475850   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.475880   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.476182   64307 provision.go:143] copyHostCerts
	I1013 22:38:41.476251   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem, removing ...
	I1013 22:38:41.476272   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem
	I1013 22:38:41.476339   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/ca.pem (1078 bytes)
	I1013 22:38:41.476489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem, removing ...
	I1013 22:38:41.476505   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem
	I1013 22:38:41.476543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/cert.pem (1123 bytes)
	I1013 22:38:41.476636   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem, removing ...
	I1013 22:38:41.476649   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem
	I1013 22:38:41.476681   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-15625/.minikube/key.pem (1675 bytes)
	I1013 22:38:41.476763   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem org=jenkins.pause-056726 san=[127.0.0.1 192.168.50.114 localhost minikube pause-056726]
	I1013 22:38:41.976552   64307 provision.go:177] copyRemoteCerts
	I1013 22:38:41.976618   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:38:41.976659   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:41.980446   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.980969   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:41.980999   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:41.981297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:41.981600   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:41.981786   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:41.981995   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:42.080693   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1013 22:38:42.128691   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 22:38:42.168107   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:38:42.205857   64307 provision.go:87] duration metric: took 739.797808ms to configureAuth
	I1013 22:38:42.205917   64307 buildroot.go:189] setting minikube options for container-runtime
	I1013 22:38:42.206211   64307 config.go:182] Loaded profile config "pause-056726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:38:42.206320   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:42.213002   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:42.213603   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:42.213636   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:42.213913   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:42.214121   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:42.214296   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:42.214418   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:42.214664   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:42.214890   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:42.214910   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:38:42.979251   64164 main.go:141] libmachine: (auto-851286) Calling .GetIP
	I1013 22:38:42.982309   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:42.982961   64164 main.go:141] libmachine: (auto-851286) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:7c", ip: ""} in network mk-auto-851286: {Iface:virbr3 ExpiryTime:2025-10-13 23:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:7c Iaid: IPaddr:192.168.83.51 Prefix:24 Hostname:auto-851286 Clientid:01:52:54:00:1f:ac:7c}
	I1013 22:38:42.982993   64164 main.go:141] libmachine: (auto-851286) DBG | domain auto-851286 has defined IP address 192.168.83.51 and MAC address 52:54:00:1f:ac:7c in network mk-auto-851286
	I1013 22:38:42.983271   64164 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1013 22:38:42.988581   64164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:38:43.005536   64164 kubeadm.go:883] updating cluster {Name:auto-851286 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:38:43.005631   64164 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:43.005677   64164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:43.045223   64164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 22:38:43.045327   64164 ssh_runner.go:195] Run: which lz4
	I1013 22:38:43.050212   64164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1013 22:38:43.055524   64164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1013 22:38:43.055559   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1013 22:38:44.726045   64164 crio.go:462] duration metric: took 1.675856504s to copy over tarball
	I1013 22:38:44.726111   64164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1013 22:38:42.943940   64655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:42.943991   64655 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1013 22:38:42.944015   64655 cache.go:58] Caching tarball of preloaded images
	I1013 22:38:42.944123   64655 preload.go:233] Found /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1013 22:38:42.944137   64655 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:38:42.944280   64655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/config.json ...
	I1013 22:38:42.944307   64655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/config.json: {Name:mkc044b6dadf0bc28bca7c223da5e424b662028c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:42.944480   64655 start.go:360] acquireMachinesLock for flannel-851286: {Name:mk81e7d45b6c30d879e4077cd05b64f26ced767a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1013 22:38:48.109634   64655 start.go:364] duration metric: took 5.165105942s to acquireMachinesLock for "flannel-851286"
	I1013 22:38:48.109708   64655 start.go:93] Provisioning new machine with config: &{Name:flannel-851286 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:38:48.109833   64655 start.go:125] createHost starting for "" (driver="kvm2")
	I1013 22:38:47.826384   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:38:47.826408   64307 machine.go:96] duration metric: took 6.768762066s to provisionDockerMachine
	I1013 22:38:47.826422   64307 start.go:293] postStartSetup for "pause-056726" (driver="kvm2")
	I1013 22:38:47.826434   64307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:38:47.826454   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:47.826830   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:38:47.826862   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:47.830452   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.830934   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:47.830965   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.831171   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:47.831353   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.831505   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:47.831701   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:47.923525   64307 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:38:47.929446   64307 info.go:137] Remote host: Buildroot 2025.02
	I1013 22:38:47.929471   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/addons for local assets ...
	I1013 22:38:47.929552   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-15625/.minikube/files for local assets ...
	I1013 22:38:47.929654   64307 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem -> 199472.pem in /etc/ssl/certs
	I1013 22:38:47.929798   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:38:47.945141   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:47.982748   64307 start.go:296] duration metric: took 156.310071ms for postStartSetup
	I1013 22:38:47.982792   64307 fix.go:56] duration metric: took 6.95032763s for fixHost
	I1013 22:38:47.982816   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:47.986308   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.986786   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:47.986817   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:47.987066   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:47.987297   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.987484   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:47.987666   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:47.987856   64307 main.go:141] libmachine: Using SSH client type: native
	I1013 22:38:47.988133   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I1013 22:38:47.988149   64307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1013 22:38:48.109483   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760395128.101107801
	
	I1013 22:38:48.109504   64307 fix.go:216] guest clock: 1760395128.101107801
	I1013 22:38:48.109512   64307 fix.go:229] Guest: 2025-10-13 22:38:48.101107801 +0000 UTC Remote: 2025-10-13 22:38:47.98279722 +0000 UTC m=+24.069035821 (delta=118.310581ms)
	I1013 22:38:48.109537   64307 fix.go:200] guest clock delta is within tolerance: 118.310581ms
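The two fix.go lines above compare the guest's `date +%s.%N` output against the host-side reference time and accept the machine when the drift is small (118.310581ms here). A rough Go sketch of that comparison follows; parseEpoch is a hypothetical helper, and the one-second tolerance is an assumption for illustration, not minikube's configured threshold.

    // Sketch only: parse the guest clock and compare it to the host reference.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch converts "seconds.nanoseconds" (the `date +%s.%N` format) to time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := (parts[1] + "000000000")[:9] // pad or trim to nanoseconds
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseEpoch("1760395128.101107801") // guest clock from the log above
    	remote, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-10-13 22:38:47.98279722 +0000 UTC") // host-side reference from the log
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold, for illustration only
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }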
	I1013 22:38:48.109544   64307 start.go:83] releasing machines lock for "pause-056726", held for 7.07711387s
	I1013 22:38:48.109575   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.109858   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:48.113678   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.114210   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.114245   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.114431   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115054   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115281   64307 main.go:141] libmachine: (pause-056726) Calling .DriverName
	I1013 22:38:48.115402   64307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:38:48.115455   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:48.115585   64307 ssh_runner.go:195] Run: cat /version.json
	I1013 22:38:48.115610   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHHostname
	I1013 22:38:48.120256   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.120941   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.121395   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.121420   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.121684   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:48.121714   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:48.121840   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:48.122058   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHPort
	I1013 22:38:48.122212   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:48.122373   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHKeyPath
	I1013 22:38:48.122596   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:48.122603   64307 main.go:141] libmachine: (pause-056726) Calling .GetSSHUsername
	I1013 22:38:48.122825   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:48.123254   64307 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/pause-056726/id_rsa Username:docker}
	I1013 22:38:48.209685   64307 ssh_runner.go:195] Run: systemctl --version
	I1013 22:38:48.237877   64307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:38:48.485856   64307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:38:48.496627   64307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:38:48.496704   64307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:38:48.510288   64307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:38:48.510318   64307 start.go:495] detecting cgroup driver to use...
	I1013 22:38:48.510400   64307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:38:48.539084   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:38:48.566554   64307 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:38:48.566613   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:38:48.596210   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:38:48.620854   64307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:38:48.872388   64307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:38:46.471242   64164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.745103745s)
	I1013 22:38:46.471285   64164 crio.go:469] duration metric: took 1.745212485s to extract the tarball
	I1013 22:38:46.471308   64164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1013 22:38:46.519200   64164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:46.567577   64164 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:46.567608   64164 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:38:46.567619   64164 kubeadm.go:934] updating node { 192.168.83.51 8443 v1.34.1 crio true true} ...
	I1013 22:38:46.567737   64164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-851286 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:38:46.567814   64164 ssh_runner.go:195] Run: crio config
	I1013 22:38:46.615474   64164 cni.go:84] Creating CNI manager for ""
	I1013 22:38:46.615496   64164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:38:46.615514   64164 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:38:46.615542   64164 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.51 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-851286 NodeName:auto-851286 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:38:46.615704   64164 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-851286"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:38:46.615784   64164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:38:46.629422   64164 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:38:46.629481   64164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:38:46.642075   64164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1013 22:38:46.663799   64164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:38:46.685366   64164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1013 22:38:46.707001   64164 ssh_runner.go:195] Run: grep 192.168.83.51	control-plane.minikube.internal$ /etc/hosts
	I1013 22:38:46.711491   64164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:38:46.726769   64164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:46.872142   64164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:38:46.908760   64164 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286 for IP: 192.168.83.51
	I1013 22:38:46.908796   64164 certs.go:195] generating shared ca certs ...
	I1013 22:38:46.908816   64164 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:46.909012   64164 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 22:38:46.909082   64164 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 22:38:46.909097   64164 certs.go:257] generating profile certs ...
	I1013 22:38:46.909218   64164 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.key
	I1013 22:38:46.909249   64164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt with IP's: []
	I1013 22:38:47.264644   64164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt ...
	I1013 22:38:47.264672   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: {Name:mk96b7d53a24feef47e43abd0db56ae5e7c97ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.264896   64164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.key ...
	I1013 22:38:47.264918   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.key: {Name:mk8dca3446bef38a09125c2861527c555dd12df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.265031   64164 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb
	I1013 22:38:47.265053   64164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.51]
	I1013 22:38:47.642563   64164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb ...
	I1013 22:38:47.642589   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb: {Name:mk11487dccab40f9c41f7ba133963c305ed74ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.642786   64164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb ...
	I1013 22:38:47.642806   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb: {Name:mkbb291d30accf2e11352db016fe7ab73ad18676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.642920   64164 certs.go:382] copying /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt.ad6549bb -> /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt
	I1013 22:38:47.643021   64164 certs.go:386] copying /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key.ad6549bb -> /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key
	I1013 22:38:47.643081   64164 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key
	I1013 22:38:47.643095   64164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt with IP's: []
	I1013 22:38:47.932169   64164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt ...
	I1013 22:38:47.932208   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt: {Name:mk4c59b0319d203b100d9c1f098dee25bdaa957e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.932368   64164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key ...
	I1013 22:38:47.932380   64164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key: {Name:mk190c1fdba64fa7d3eb17d89b5a5eebf0a923be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:47.932551   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 22:38:47.932584   64164 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 22:38:47.932594   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:38:47.932614   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:38:47.932636   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:38:47.932657   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 22:38:47.932693   64164 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:47.933315   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:38:47.970268   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:38:48.012289   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:38:48.050071   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:38:48.084180   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 22:38:48.126334   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:38:48.161717   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:38:48.249510   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:38:48.298551   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:38:48.333488   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 22:38:48.366929   64164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 22:38:48.414227   64164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:38:48.438853   64164 ssh_runner.go:195] Run: openssl version
	I1013 22:38:48.446362   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:38:48.464050   64164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:48.471498   64164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:48.471568   64164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:48.480456   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:38:48.500956   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 22:38:48.521780   64164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 22:38:48.530204   64164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 22:38:48.530271   64164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 22:38:48.541579   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 22:38:48.568261   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 22:38:48.594755   64164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 22:38:48.602350   64164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 22:38:48.602416   64164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 22:38:48.616235   64164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
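The openssl and ln pairs above are the standard CA-bundle indexing step: each PEM is hashed with `openssl x509 -hash -noout` and linked into /etc/ssl/certs as "<hash>.0" so TLS libraries can look it up. A small Go sketch of the same step follows; linkCert is an illustrative helper rather than minikube's code, and in the run above the equivalent commands execute over SSH inside the VM.

    // Sketch only: hash a CA certificate and symlink it into the certs directory.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert computes the OpenSSL subject hash of certPath and symlinks it
    // into certsDir as "<hash>.0", mirroring the `openssl x509 -hash` + `ln -fs`
    // pair shown in the log.
    func linkCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate the force behaviour of `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println("link failed:", err)
    	}
    }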
	I1013 22:38:48.635009   64164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:38:48.640966   64164 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:38:48.641035   64164 kubeadm.go:400] StartCluster: {Name:auto-851286 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-851286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:48.641134   64164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:38:48.641264   64164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:38:48.691555   64164 cri.go:89] found id: ""
	I1013 22:38:48.691636   64164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:38:48.711469   64164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:38:48.725409   64164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:38:48.739704   64164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:38:48.739734   64164 kubeadm.go:157] found existing configuration files:
	
	I1013 22:38:48.739792   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:38:48.752777   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:38:48.752854   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:38:48.771237   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:38:48.787446   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:38:48.787514   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:38:48.804472   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:38:48.816949   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:38:48.817036   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:38:48.829845   64164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:38:48.842284   64164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:38:48.842359   64164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:38:48.857001   64164 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1013 22:38:48.925865   64164 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:38:48.925951   64164 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:38:49.045123   64164 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:38:49.045267   64164 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:38:49.045383   64164 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:38:49.062212   64164 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:38:49.240350   64164 out.go:252]   - Generating certificates and keys ...
	I1013 22:38:49.240442   64164 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:38:49.240493   64164 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:38:49.240558   64164 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:38:49.421424   64164 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:38:49.759626   64164 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:38:49.088960   64307 docker.go:234] disabling docker service ...
	I1013 22:38:49.089059   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:38:49.122978   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:38:49.142380   64307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:38:49.345900   64307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:38:49.582902   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:38:49.603147   64307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:38:49.634419   64307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:38:49.634491   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.649208   64307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:38:49.649288   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.682378   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.704376   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.758297   64307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:38:49.787167   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.820057   64307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:38:49.843948   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
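Taken together, the sed one-liners above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, place conmon in the pod cgroup, and open unprivileged low ports. The drop-in /etc/crio/crio.conf.d/02-crio.conf is therefore expected to end up with roughly the following settings; this is a reconstruction from the commands shown, not a capture from the VM.

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]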
	I1013 22:38:49.878037   64307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:38:49.905531   64307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:38:49.925073   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:50.298279   64307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:38:50.864747   64307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:38:50.864846   64307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:38:50.873254   64307 start.go:563] Will wait 60s for crictl version
	I1013 22:38:50.873323   64307 ssh_runner.go:195] Run: which crictl
	I1013 22:38:50.880216   64307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1013 22:38:50.931241   64307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1013 22:38:50.931319   64307 ssh_runner.go:195] Run: crio --version
	I1013 22:38:50.968087   64307 ssh_runner.go:195] Run: crio --version
	I1013 22:38:51.010888   64307 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1013 22:38:48.236403   64655 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1013 22:38:48.236656   64655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:38:48.236749   64655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:38:48.256345   64655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I1013 22:38:48.256993   64655 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:38:48.257682   64655 main.go:141] libmachine: Using API Version  1
	I1013 22:38:48.257717   64655 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:38:48.258151   64655 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:38:48.258342   64655 main.go:141] libmachine: (flannel-851286) Calling .GetMachineName
	I1013 22:38:48.258505   64655 main.go:141] libmachine: (flannel-851286) Calling .DriverName
	I1013 22:38:48.258684   64655 start.go:159] libmachine.API.Create for "flannel-851286" (driver="kvm2")
	I1013 22:38:48.258716   64655 client.go:168] LocalClient.Create starting
	I1013 22:38:48.258751   64655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem
	I1013 22:38:48.258799   64655 main.go:141] libmachine: Decoding PEM data...
	I1013 22:38:48.258823   64655 main.go:141] libmachine: Parsing certificate...
	I1013 22:38:48.258937   64655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem
	I1013 22:38:48.258969   64655 main.go:141] libmachine: Decoding PEM data...
	I1013 22:38:48.258984   64655 main.go:141] libmachine: Parsing certificate...
	I1013 22:38:48.259010   64655 main.go:141] libmachine: Running pre-create checks...
	I1013 22:38:48.259022   64655 main.go:141] libmachine: (flannel-851286) Calling .PreCreateCheck
	I1013 22:38:48.259450   64655 main.go:141] libmachine: (flannel-851286) Calling .GetConfigRaw
	I1013 22:38:48.259939   64655 main.go:141] libmachine: Creating machine...
	I1013 22:38:48.259952   64655 main.go:141] libmachine: (flannel-851286) Calling .Create
	I1013 22:38:48.260128   64655 main.go:141] libmachine: (flannel-851286) creating domain...
	I1013 22:38:48.260148   64655 main.go:141] libmachine: (flannel-851286) creating network...
	I1013 22:38:48.261921   64655 main.go:141] libmachine: (flannel-851286) DBG | found existing default network
	I1013 22:38:48.262092   64655 main.go:141] libmachine: (flannel-851286) DBG | <network connections='3'>
	I1013 22:38:48.262110   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>default</name>
	I1013 22:38:48.262132   64655 main.go:141] libmachine: (flannel-851286) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1013 22:38:48.262142   64655 main.go:141] libmachine: (flannel-851286) DBG |   <forward mode='nat'>
	I1013 22:38:48.262150   64655 main.go:141] libmachine: (flannel-851286) DBG |     <nat>
	I1013 22:38:48.262185   64655 main.go:141] libmachine: (flannel-851286) DBG |       <port start='1024' end='65535'/>
	I1013 22:38:48.262223   64655 main.go:141] libmachine: (flannel-851286) DBG |     </nat>
	I1013 22:38:48.262243   64655 main.go:141] libmachine: (flannel-851286) DBG |   </forward>
	I1013 22:38:48.262272   64655 main.go:141] libmachine: (flannel-851286) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1013 22:38:48.262287   64655 main.go:141] libmachine: (flannel-851286) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1013 22:38:48.262301   64655 main.go:141] libmachine: (flannel-851286) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1013 22:38:48.262307   64655 main.go:141] libmachine: (flannel-851286) DBG |     <dhcp>
	I1013 22:38:48.262317   64655 main.go:141] libmachine: (flannel-851286) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1013 22:38:48.262324   64655 main.go:141] libmachine: (flannel-851286) DBG |     </dhcp>
	I1013 22:38:48.262352   64655 main.go:141] libmachine: (flannel-851286) DBG |   </ip>
	I1013 22:38:48.262363   64655 main.go:141] libmachine: (flannel-851286) DBG | </network>
	I1013 22:38:48.262375   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.263745   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:48.263542   64716 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201390}
	I1013 22:38:48.263776   64655 main.go:141] libmachine: (flannel-851286) DBG | defining private network:
	I1013 22:38:48.263788   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.263795   64655 main.go:141] libmachine: (flannel-851286) DBG | <network>
	I1013 22:38:48.263803   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>mk-flannel-851286</name>
	I1013 22:38:48.263809   64655 main.go:141] libmachine: (flannel-851286) DBG |   <dns enable='no'/>
	I1013 22:38:48.263818   64655 main.go:141] libmachine: (flannel-851286) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 22:38:48.263825   64655 main.go:141] libmachine: (flannel-851286) DBG |     <dhcp>
	I1013 22:38:48.263835   64655 main.go:141] libmachine: (flannel-851286) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 22:38:48.263841   64655 main.go:141] libmachine: (flannel-851286) DBG |     </dhcp>
	I1013 22:38:48.263849   64655 main.go:141] libmachine: (flannel-851286) DBG |   </ip>
	I1013 22:38:48.263856   64655 main.go:141] libmachine: (flannel-851286) DBG | </network>
	I1013 22:38:48.263864   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.395487   64655 main.go:141] libmachine: (flannel-851286) DBG | creating private network mk-flannel-851286 192.168.39.0/24...
	I1013 22:38:48.521067   64655 main.go:141] libmachine: (flannel-851286) DBG | private network mk-flannel-851286 192.168.39.0/24 created
	I1013 22:38:48.521329   64655 main.go:141] libmachine: (flannel-851286) DBG | <network>
	I1013 22:38:48.521369   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>mk-flannel-851286</name>
	I1013 22:38:48.521382   64655 main.go:141] libmachine: (flannel-851286) setting up store path in /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286 ...
	I1013 22:38:48.521403   64655 main.go:141] libmachine: (flannel-851286) building disk image from file:///home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 22:38:48.521417   64655 main.go:141] libmachine: (flannel-851286) DBG |   <uuid>43f9a973-809d-485b-96ce-d0273013c796</uuid>
	I1013 22:38:48.521425   64655 main.go:141] libmachine: (flannel-851286) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1013 22:38:48.521436   64655 main.go:141] libmachine: (flannel-851286) DBG |   <mac address='52:54:00:06:2d:52'/>
	I1013 22:38:48.521457   64655 main.go:141] libmachine: (flannel-851286) Downloading /home/jenkins/minikube-integration/21724-15625/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1013 22:38:48.521471   64655 main.go:141] libmachine: (flannel-851286) DBG |   <dns enable='no'/>
	I1013 22:38:48.521482   64655 main.go:141] libmachine: (flannel-851286) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1013 22:38:48.521493   64655 main.go:141] libmachine: (flannel-851286) DBG |     <dhcp>
	I1013 22:38:48.521505   64655 main.go:141] libmachine: (flannel-851286) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1013 22:38:48.521514   64655 main.go:141] libmachine: (flannel-851286) DBG |     </dhcp>
	I1013 22:38:48.521522   64655 main.go:141] libmachine: (flannel-851286) DBG |   </ip>
	I1013 22:38:48.521531   64655 main.go:141] libmachine: (flannel-851286) DBG | </network>
	I1013 22:38:48.521548   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:48.521566   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:48.521324   64716 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:38:49.760089   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:49.759950   64716 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/id_rsa...
	I1013 22:38:50.036591   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:50.036428   64716 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/flannel-851286.rawdisk...
	I1013 22:38:50.036628   64655 main.go:141] libmachine: (flannel-851286) DBG | Writing magic tar header
	I1013 22:38:50.036642   64655 main.go:141] libmachine: (flannel-851286) DBG | Writing SSH key tar header
	I1013 22:38:50.036655   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:50.036580   64716 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286 ...
	I1013 22:38:50.036760   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286
	I1013 22:38:50.036786   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube/machines
	I1013 22:38:50.036800   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286 (perms=drwx------)
	I1013 22:38:50.036831   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:38:50.036873   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube/machines (perms=drwxr-xr-x)
	I1013 22:38:50.036889   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21724-15625
	I1013 22:38:50.036905   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1013 22:38:50.036919   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home/jenkins
	I1013 22:38:50.036929   64655 main.go:141] libmachine: (flannel-851286) DBG | checking permissions on dir: /home
	I1013 22:38:50.036936   64655 main.go:141] libmachine: (flannel-851286) DBG | skipping /home - not owner
	I1013 22:38:50.036980   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625/.minikube (perms=drwxr-xr-x)
	I1013 22:38:50.037002   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration/21724-15625 (perms=drwxrwxr-x)
	I1013 22:38:50.037018   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1013 22:38:50.037031   64655 main.go:141] libmachine: (flannel-851286) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1013 22:38:50.037046   64655 main.go:141] libmachine: (flannel-851286) defining domain...
	I1013 22:38:50.038631   64655 main.go:141] libmachine: (flannel-851286) defining domain using XML: 
	I1013 22:38:50.038648   64655 main.go:141] libmachine: (flannel-851286) <domain type='kvm'>
	I1013 22:38:50.038680   64655 main.go:141] libmachine: (flannel-851286)   <name>flannel-851286</name>
	I1013 22:38:50.038725   64655 main.go:141] libmachine: (flannel-851286)   <memory unit='MiB'>3072</memory>
	I1013 22:38:50.038737   64655 main.go:141] libmachine: (flannel-851286)   <vcpu>2</vcpu>
	I1013 22:38:50.038756   64655 main.go:141] libmachine: (flannel-851286)   <features>
	I1013 22:38:50.038766   64655 main.go:141] libmachine: (flannel-851286)     <acpi/>
	I1013 22:38:50.038773   64655 main.go:141] libmachine: (flannel-851286)     <apic/>
	I1013 22:38:50.038802   64655 main.go:141] libmachine: (flannel-851286)     <pae/>
	I1013 22:38:50.038815   64655 main.go:141] libmachine: (flannel-851286)   </features>
	I1013 22:38:50.038825   64655 main.go:141] libmachine: (flannel-851286)   <cpu mode='host-passthrough'>
	I1013 22:38:50.038836   64655 main.go:141] libmachine: (flannel-851286)   </cpu>
	I1013 22:38:50.038844   64655 main.go:141] libmachine: (flannel-851286)   <os>
	I1013 22:38:50.038858   64655 main.go:141] libmachine: (flannel-851286)     <type>hvm</type>
	I1013 22:38:50.038865   64655 main.go:141] libmachine: (flannel-851286)     <boot dev='cdrom'/>
	I1013 22:38:50.038881   64655 main.go:141] libmachine: (flannel-851286)     <boot dev='hd'/>
	I1013 22:38:50.038890   64655 main.go:141] libmachine: (flannel-851286)     <bootmenu enable='no'/>
	I1013 22:38:50.038902   64655 main.go:141] libmachine: (flannel-851286)   </os>
	I1013 22:38:50.038911   64655 main.go:141] libmachine: (flannel-851286)   <devices>
	I1013 22:38:50.038933   64655 main.go:141] libmachine: (flannel-851286)     <disk type='file' device='cdrom'>
	I1013 22:38:50.038951   64655 main.go:141] libmachine: (flannel-851286)       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/boot2docker.iso'/>
	I1013 22:38:50.038959   64655 main.go:141] libmachine: (flannel-851286)       <target dev='hdc' bus='scsi'/>
	I1013 22:38:50.038966   64655 main.go:141] libmachine: (flannel-851286)       <readonly/>
	I1013 22:38:50.038972   64655 main.go:141] libmachine: (flannel-851286)     </disk>
	I1013 22:38:50.039004   64655 main.go:141] libmachine: (flannel-851286)     <disk type='file' device='disk'>
	I1013 22:38:50.039030   64655 main.go:141] libmachine: (flannel-851286)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1013 22:38:50.039063   64655 main.go:141] libmachine: (flannel-851286)       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/flannel-851286.rawdisk'/>
	I1013 22:38:50.039077   64655 main.go:141] libmachine: (flannel-851286)       <target dev='hda' bus='virtio'/>
	I1013 22:38:50.039087   64655 main.go:141] libmachine: (flannel-851286)     </disk>
	I1013 22:38:50.039095   64655 main.go:141] libmachine: (flannel-851286)     <interface type='network'>
	I1013 22:38:50.039110   64655 main.go:141] libmachine: (flannel-851286)       <source network='mk-flannel-851286'/>
	I1013 22:38:50.039119   64655 main.go:141] libmachine: (flannel-851286)       <model type='virtio'/>
	I1013 22:38:50.039145   64655 main.go:141] libmachine: (flannel-851286)     </interface>
	I1013 22:38:50.039172   64655 main.go:141] libmachine: (flannel-851286)     <interface type='network'>
	I1013 22:38:50.039181   64655 main.go:141] libmachine: (flannel-851286)       <source network='default'/>
	I1013 22:38:50.039209   64655 main.go:141] libmachine: (flannel-851286)       <model type='virtio'/>
	I1013 22:38:50.039228   64655 main.go:141] libmachine: (flannel-851286)     </interface>
	I1013 22:38:50.039258   64655 main.go:141] libmachine: (flannel-851286)     <serial type='pty'>
	I1013 22:38:50.039269   64655 main.go:141] libmachine: (flannel-851286)       <target port='0'/>
	I1013 22:38:50.039278   64655 main.go:141] libmachine: (flannel-851286)     </serial>
	I1013 22:38:50.039303   64655 main.go:141] libmachine: (flannel-851286)     <console type='pty'>
	I1013 22:38:50.039312   64655 main.go:141] libmachine: (flannel-851286)       <target type='serial' port='0'/>
	I1013 22:38:50.039318   64655 main.go:141] libmachine: (flannel-851286)     </console>
	I1013 22:38:50.039326   64655 main.go:141] libmachine: (flannel-851286)     <rng model='virtio'>
	I1013 22:38:50.039334   64655 main.go:141] libmachine: (flannel-851286)       <backend model='random'>/dev/random</backend>
	I1013 22:38:50.039342   64655 main.go:141] libmachine: (flannel-851286)     </rng>
	I1013 22:38:50.039348   64655 main.go:141] libmachine: (flannel-851286)   </devices>
	I1013 22:38:50.039355   64655 main.go:141] libmachine: (flannel-851286) </domain>
	I1013 22:38:50.039361   64655 main.go:141] libmachine: (flannel-851286) 
	I1013 22:38:50.131891   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:c6:f4:70 in network default
	I1013 22:38:50.132651   64655 main.go:141] libmachine: (flannel-851286) starting domain...
	I1013 22:38:50.132694   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:50.132703   64655 main.go:141] libmachine: (flannel-851286) ensuring networks are active...
	I1013 22:38:50.133591   64655 main.go:141] libmachine: (flannel-851286) Ensuring network default is active
	I1013 22:38:50.134010   64655 main.go:141] libmachine: (flannel-851286) Ensuring network mk-flannel-851286 is active
	I1013 22:38:50.134894   64655 main.go:141] libmachine: (flannel-851286) getting domain XML...
	I1013 22:38:50.136194   64655 main.go:141] libmachine: (flannel-851286) DBG | starting domain XML:
	I1013 22:38:50.136218   64655 main.go:141] libmachine: (flannel-851286) DBG | <domain type='kvm'>
	I1013 22:38:50.136229   64655 main.go:141] libmachine: (flannel-851286) DBG |   <name>flannel-851286</name>
	I1013 22:38:50.136238   64655 main.go:141] libmachine: (flannel-851286) DBG |   <uuid>7ab29013-61ac-4ddf-a05f-47c403c9b522</uuid>
	I1013 22:38:50.136248   64655 main.go:141] libmachine: (flannel-851286) DBG |   <memory unit='KiB'>3145728</memory>
	I1013 22:38:50.136267   64655 main.go:141] libmachine: (flannel-851286) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1013 22:38:50.136278   64655 main.go:141] libmachine: (flannel-851286) DBG |   <vcpu placement='static'>2</vcpu>
	I1013 22:38:50.136283   64655 main.go:141] libmachine: (flannel-851286) DBG |   <os>
	I1013 22:38:50.136294   64655 main.go:141] libmachine: (flannel-851286) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1013 22:38:50.136301   64655 main.go:141] libmachine: (flannel-851286) DBG |     <boot dev='cdrom'/>
	I1013 22:38:50.136322   64655 main.go:141] libmachine: (flannel-851286) DBG |     <boot dev='hd'/>
	I1013 22:38:50.136329   64655 main.go:141] libmachine: (flannel-851286) DBG |     <bootmenu enable='no'/>
	I1013 22:38:50.136340   64655 main.go:141] libmachine: (flannel-851286) DBG |   </os>
	I1013 22:38:50.136356   64655 main.go:141] libmachine: (flannel-851286) DBG |   <features>
	I1013 22:38:50.136367   64655 main.go:141] libmachine: (flannel-851286) DBG |     <acpi/>
	I1013 22:38:50.136375   64655 main.go:141] libmachine: (flannel-851286) DBG |     <apic/>
	I1013 22:38:50.136387   64655 main.go:141] libmachine: (flannel-851286) DBG |     <pae/>
	I1013 22:38:50.136396   64655 main.go:141] libmachine: (flannel-851286) DBG |   </features>
	I1013 22:38:50.136407   64655 main.go:141] libmachine: (flannel-851286) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1013 22:38:50.136426   64655 main.go:141] libmachine: (flannel-851286) DBG |   <clock offset='utc'/>
	I1013 22:38:50.136438   64655 main.go:141] libmachine: (flannel-851286) DBG |   <on_poweroff>destroy</on_poweroff>
	I1013 22:38:50.136454   64655 main.go:141] libmachine: (flannel-851286) DBG |   <on_reboot>restart</on_reboot>
	I1013 22:38:50.136486   64655 main.go:141] libmachine: (flannel-851286) DBG |   <on_crash>destroy</on_crash>
	I1013 22:38:50.136509   64655 main.go:141] libmachine: (flannel-851286) DBG |   <devices>
	I1013 22:38:50.136521   64655 main.go:141] libmachine: (flannel-851286) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1013 22:38:50.136531   64655 main.go:141] libmachine: (flannel-851286) DBG |     <disk type='file' device='cdrom'>
	I1013 22:38:50.136542   64655 main.go:141] libmachine: (flannel-851286) DBG |       <driver name='qemu' type='raw'/>
	I1013 22:38:50.136558   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/boot2docker.iso'/>
	I1013 22:38:50.136571   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target dev='hdc' bus='scsi'/>
	I1013 22:38:50.136583   64655 main.go:141] libmachine: (flannel-851286) DBG |       <readonly/>
	I1013 22:38:50.136595   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1013 22:38:50.136606   64655 main.go:141] libmachine: (flannel-851286) DBG |     </disk>
	I1013 22:38:50.136615   64655 main.go:141] libmachine: (flannel-851286) DBG |     <disk type='file' device='disk'>
	I1013 22:38:50.136624   64655 main.go:141] libmachine: (flannel-851286) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1013 22:38:50.136637   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source file='/home/jenkins/minikube-integration/21724-15625/.minikube/machines/flannel-851286/flannel-851286.rawdisk'/>
	I1013 22:38:50.136646   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target dev='hda' bus='virtio'/>
	I1013 22:38:50.136661   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1013 22:38:50.136669   64655 main.go:141] libmachine: (flannel-851286) DBG |     </disk>
	I1013 22:38:50.136679   64655 main.go:141] libmachine: (flannel-851286) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1013 22:38:50.136688   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1013 22:38:50.136696   64655 main.go:141] libmachine: (flannel-851286) DBG |     </controller>
	I1013 22:38:50.136709   64655 main.go:141] libmachine: (flannel-851286) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1013 22:38:50.136719   64655 main.go:141] libmachine: (flannel-851286) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1013 22:38:50.136736   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1013 22:38:50.136746   64655 main.go:141] libmachine: (flannel-851286) DBG |     </controller>
	I1013 22:38:50.136757   64655 main.go:141] libmachine: (flannel-851286) DBG |     <interface type='network'>
	I1013 22:38:50.136767   64655 main.go:141] libmachine: (flannel-851286) DBG |       <mac address='52:54:00:49:d1:5b'/>
	I1013 22:38:50.136777   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source network='mk-flannel-851286'/>
	I1013 22:38:50.136787   64655 main.go:141] libmachine: (flannel-851286) DBG |       <model type='virtio'/>
	I1013 22:38:50.136799   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1013 22:38:50.136812   64655 main.go:141] libmachine: (flannel-851286) DBG |     </interface>
	I1013 22:38:50.136824   64655 main.go:141] libmachine: (flannel-851286) DBG |     <interface type='network'>
	I1013 22:38:50.136837   64655 main.go:141] libmachine: (flannel-851286) DBG |       <mac address='52:54:00:c6:f4:70'/>
	I1013 22:38:50.136860   64655 main.go:141] libmachine: (flannel-851286) DBG |       <source network='default'/>
	I1013 22:38:50.136870   64655 main.go:141] libmachine: (flannel-851286) DBG |       <model type='virtio'/>
	I1013 22:38:50.136879   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1013 22:38:50.136894   64655 main.go:141] libmachine: (flannel-851286) DBG |     </interface>
	I1013 22:38:50.136912   64655 main.go:141] libmachine: (flannel-851286) DBG |     <serial type='pty'>
	I1013 22:38:50.136925   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target type='isa-serial' port='0'>
	I1013 22:38:50.136943   64655 main.go:141] libmachine: (flannel-851286) DBG |         <model name='isa-serial'/>
	I1013 22:38:50.136954   64655 main.go:141] libmachine: (flannel-851286) DBG |       </target>
	I1013 22:38:50.136959   64655 main.go:141] libmachine: (flannel-851286) DBG |     </serial>
	I1013 22:38:50.136964   64655 main.go:141] libmachine: (flannel-851286) DBG |     <console type='pty'>
	I1013 22:38:50.136977   64655 main.go:141] libmachine: (flannel-851286) DBG |       <target type='serial' port='0'/>
	I1013 22:38:50.136985   64655 main.go:141] libmachine: (flannel-851286) DBG |     </console>
	I1013 22:38:50.136992   64655 main.go:141] libmachine: (flannel-851286) DBG |     <input type='mouse' bus='ps2'/>
	I1013 22:38:50.137000   64655 main.go:141] libmachine: (flannel-851286) DBG |     <input type='keyboard' bus='ps2'/>
	I1013 22:38:50.137007   64655 main.go:141] libmachine: (flannel-851286) DBG |     <audio id='1' type='none'/>
	I1013 22:38:50.137026   64655 main.go:141] libmachine: (flannel-851286) DBG |     <memballoon model='virtio'>
	I1013 22:38:50.137040   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1013 22:38:50.137053   64655 main.go:141] libmachine: (flannel-851286) DBG |     </memballoon>
	I1013 22:38:50.137062   64655 main.go:141] libmachine: (flannel-851286) DBG |     <rng model='virtio'>
	I1013 22:38:50.137074   64655 main.go:141] libmachine: (flannel-851286) DBG |       <backend model='random'>/dev/random</backend>
	I1013 22:38:50.137087   64655 main.go:141] libmachine: (flannel-851286) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1013 22:38:50.137099   64655 main.go:141] libmachine: (flannel-851286) DBG |     </rng>
	I1013 22:38:50.137109   64655 main.go:141] libmachine: (flannel-851286) DBG |   </devices>
	I1013 22:38:50.137119   64655 main.go:141] libmachine: (flannel-851286) DBG | </domain>
	I1013 22:38:50.137129   64655 main.go:141] libmachine: (flannel-851286) DBG | 
	I1013 22:38:51.640639   64655 main.go:141] libmachine: (flannel-851286) waiting for domain to start...
	I1013 22:38:51.642323   64655 main.go:141] libmachine: (flannel-851286) domain is now running
	I1013 22:38:51.642352   64655 main.go:141] libmachine: (flannel-851286) waiting for IP...
	I1013 22:38:51.643327   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:51.644137   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:51.644175   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:51.644703   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:51.644755   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:51.644696   64716 retry.go:31] will retry after 255.96681ms: waiting for domain to come up
	I1013 22:38:51.902641   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:51.903410   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:51.903435   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:51.903839   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:51.903872   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:51.903816   64716 retry.go:31] will retry after 290.474278ms: waiting for domain to come up
	I1013 22:38:52.196591   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:52.197345   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:52.197376   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:52.197783   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:52.197815   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:52.197756   64716 retry.go:31] will retry after 318.393842ms: waiting for domain to come up
	I1013 22:38:52.518663   64655 main.go:141] libmachine: (flannel-851286) DBG | domain flannel-851286 has defined MAC address 52:54:00:49:d1:5b in network mk-flannel-851286
	I1013 22:38:52.519447   64655 main.go:141] libmachine: (flannel-851286) DBG | no network interface addresses found for domain flannel-851286 (source=lease)
	I1013 22:38:52.519479   64655 main.go:141] libmachine: (flannel-851286) DBG | trying to list again with source=arp
	I1013 22:38:52.519866   64655 main.go:141] libmachine: (flannel-851286) DBG | unable to find current IP address of domain flannel-851286 in network mk-flannel-851286 (interfaces detected: [])
	I1013 22:38:52.519939   64655 main.go:141] libmachine: (flannel-851286) DBG | I1013 22:38:52.519853   64716 retry.go:31] will retry after 485.032894ms: waiting for domain to come up
	I1013 22:38:50.001323   64164 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:38:50.248683   64164 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:38:50.248873   64164 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-851286 localhost] and IPs [192.168.83.51 127.0.0.1 ::1]
	I1013 22:38:50.673686   64164 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:38:50.673877   64164 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-851286 localhost] and IPs [192.168.83.51 127.0.0.1 ::1]
	I1013 22:38:51.128872   64164 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:38:51.327351   64164 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:38:51.963053   64164 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:38:51.963278   64164 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:38:52.009107   64164 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:38:52.321882   64164 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:38:52.580796   64164 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:38:52.973514   64164 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:38:53.590747   64164 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:38:53.591466   64164 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:38:53.597314   64164 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:38:51.012295   64307 main.go:141] libmachine: (pause-056726) Calling .GetIP
	I1013 22:38:51.016095   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:51.016687   64307 main.go:141] libmachine: (pause-056726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:d4:33", ip: ""} in network mk-pause-056726: {Iface:virbr2 ExpiryTime:2025-10-13 23:37:09 +0000 UTC Type:0 Mac:52:54:00:7b:d4:33 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:pause-056726 Clientid:01:52:54:00:7b:d4:33}
	I1013 22:38:51.016718   64307 main.go:141] libmachine: (pause-056726) DBG | domain pause-056726 has defined IP address 192.168.50.114 and MAC address 52:54:00:7b:d4:33 in network mk-pause-056726
	I1013 22:38:51.017048   64307 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1013 22:38:51.023670   64307 kubeadm.go:883] updating cluster {Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:38:51.023832   64307 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:38:51.023891   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:51.081614   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:51.081644   64307 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:38:51.081718   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:38:51.130060   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:38:51.130087   64307 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:38:51.130095   64307 kubeadm.go:934] updating node { 192.168.50.114 8443 v1.34.1 crio true true} ...
	I1013 22:38:51.130248   64307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-056726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:38:51.130346   64307 ssh_runner.go:195] Run: crio config
	I1013 22:38:51.201189   64307 cni.go:84] Creating CNI manager for ""
	I1013 22:38:51.201222   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 22:38:51.201242   64307 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:38:51.201267   64307 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056726 NodeName:pause-056726 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:38:51.201429   64307 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056726"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.114"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:38:51.201498   64307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:38:51.217808   64307 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:38:51.217897   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:38:51.233569   64307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1013 22:38:51.261591   64307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:38:51.287766   64307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1013 22:38:51.316017   64307 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I1013 22:38:51.321143   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:38:51.572704   64307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:38:51.643105   64307 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726 for IP: 192.168.50.114
	I1013 22:38:51.643127   64307 certs.go:195] generating shared ca certs ...
	I1013 22:38:51.643172   64307 certs.go:227] acquiring lock for ca certs: {Name:mk571ab777fecb8fbabce6e1d2676cf4c099bd41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:38:51.643346   64307 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key
	I1013 22:38:51.643408   64307 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key
	I1013 22:38:51.643424   64307 certs.go:257] generating profile certs ...
	I1013 22:38:51.643550   64307 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/client.key
	I1013 22:38:51.643650   64307 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.key.470e9060
	I1013 22:38:51.643709   64307 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.key
	I1013 22:38:51.643862   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem (1338 bytes)
	W1013 22:38:51.643922   64307 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947_empty.pem, impossibly tiny 0 bytes
	I1013 22:38:51.643944   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca-key.pem (1675 bytes)
	I1013 22:38:51.643989   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/ca.pem (1078 bytes)
	I1013 22:38:51.644039   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:38:51.644088   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/certs/key.pem (1675 bytes)
	I1013 22:38:51.644185   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem (1708 bytes)
	I1013 22:38:51.645127   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:38:51.767866   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1013 22:38:51.872623   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:38:51.962000   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1013 22:38:52.020524   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 22:38:52.106256   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:38:52.186178   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:38:52.253585   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/pause-056726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:38:52.358197   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/ssl/certs/199472.pem --> /usr/share/ca-certificates/199472.pem (1708 bytes)
	I1013 22:38:52.424688   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:38:52.471765   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-15625/.minikube/certs/19947.pem --> /usr/share/ca-certificates/19947.pem (1338 bytes)
	I1013 22:38:52.527060   64307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:38:52.595263   64307 ssh_runner.go:195] Run: openssl version
	I1013 22:38:52.603719   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/199472.pem && ln -fs /usr/share/ca-certificates/199472.pem /etc/ssl/certs/199472.pem"
	I1013 22:38:52.624291   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.630957   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:27 /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.631025   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/199472.pem
	I1013 22:38:52.639973   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/199472.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:38:52.654151   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:38:52.671610   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.678096   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 21:18 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.678190   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:38:52.686913   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:38:52.703128   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19947.pem && ln -fs /usr/share/ca-certificates/19947.pem /etc/ssl/certs/19947.pem"
	I1013 22:38:52.733509   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.747790   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:27 /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.747855   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19947.pem
	I1013 22:38:52.762122   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19947.pem /etc/ssl/certs/51391683.0"
	I1013 22:38:52.795639   64307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:38:52.802035   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:38:52.810138   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:38:52.818740   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:38:52.826691   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:38:52.835090   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:38:52.843652   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 22:38:52.852783   64307 kubeadm.go:400] StartCluster: {Name:pause-056726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-056726 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:38:52.852934   64307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:38:52.852998   64307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:38:52.902942   64307 cri.go:89] found id: "1976935c4f01c7b9a13df7bb5d1d9ef512d248f7c51f7a17a8b7f01f5550a483"
	I1013 22:38:52.902969   64307 cri.go:89] found id: "7c29c423def7a994b132040a9614198e6a709fb14a87b4aacd14e813aa559ac8"
	I1013 22:38:52.902975   64307 cri.go:89] found id: "2da2442d80a23198b8938c1f85a9a443748c2b569431aed123dd840114bc725e"
	I1013 22:38:52.902980   64307 cri.go:89] found id: "46e601cd1b2a167997d7436a8e04ac20c370b61038e9b38abdbcafb3714df69a"
	I1013 22:38:52.902984   64307 cri.go:89] found id: "6eecfceb7178ca1572d2db0b0e0d133f998fef7c72f5be015811563a9c3b9ab7"
	I1013 22:38:52.902989   64307 cri.go:89] found id: "346a3bf45b515168f44c5eb17452a5999dc929d16bb03bfcb6b992a05d0e5953"
	I1013 22:38:52.902992   64307 cri.go:89] found id: "8341b5658a3dbfd304eee1bfcc1db60614f0dde6f2f0db558b10851d5bea38ab"
	I1013 22:38:52.902996   64307 cri.go:89] found id: "cc85e6bee7a15884026948a07a78f5832470b4fdf1803cf08249b1b207b9a86c"
	I1013 22:38:52.902999   64307 cri.go:89] found id: ""
	I1013 22:38:52.903071   64307 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-056726 -n pause-056726
helpers_test.go:269: (dbg) Run:  kubectl --context pause-056726 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.68s)

                                                
                                    

Test pass (271/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.81
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.08
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 106.9
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 169.16
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 9.6
35 TestAddons/parallel/Registry 16.95
36 TestAddons/parallel/RegistryCreds 0.96
38 TestAddons/parallel/InspektorGadget 6.52
39 TestAddons/parallel/MetricsServer 6.12
41 TestAddons/parallel/CSI 53.02
42 TestAddons/parallel/Headlamp 20.01
43 TestAddons/parallel/CloudSpanner 6.7
44 TestAddons/parallel/LocalPath 12.25
45 TestAddons/parallel/NvidiaDevicePlugin 6.73
46 TestAddons/parallel/Yakd 12.55
48 TestAddons/StoppedEnableDisable 80.59
49 TestCertOptions 77.95
50 TestCertExpiration 281.56
52 TestForceSystemdFlag 84.11
53 TestForceSystemdEnv 43.31
55 TestKVMDriverInstallOrUpdate 0.51
59 TestErrorSpam/setup 42.21
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.77
62 TestErrorSpam/pause 1.73
63 TestErrorSpam/unpause 1.86
64 TestErrorSpam/stop 88.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.93
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.99
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
76 TestFunctional/serial/CacheCmd/cache/add_local 1.12
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
81 TestFunctional/serial/CacheCmd/cache/delete 0.09
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
86 TestFunctional/serial/LogsCmd 1.4
87 TestFunctional/serial/LogsFileCmd 1.39
88 TestFunctional/serial/InvalidService 4.7
90 TestFunctional/parallel/ConfigCmd 0.34
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.92
99 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/SSHCmd 0.41
103 TestFunctional/parallel/CpCmd 1.35
105 TestFunctional/parallel/FileSync 0.2
106 TestFunctional/parallel/CertSync 1.21
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
114 TestFunctional/parallel/License 0.4
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
126 TestFunctional/parallel/ProfileCmd/profile_list 0.38
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
128 TestFunctional/parallel/MountCmd/any-port 7.41
129 TestFunctional/parallel/MountCmd/specific-port 1.83
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
131 TestFunctional/parallel/Version/short 0.05
132 TestFunctional/parallel/Version/components 0.46
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
137 TestFunctional/parallel/ImageCommands/ImageBuild 2.96
138 TestFunctional/parallel/ImageCommands/Setup 0.39
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
149 TestFunctional/parallel/ServiceCmd/List 1.24
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 224.9
162 TestMultiControlPlane/serial/DeployApp 6.26
163 TestMultiControlPlane/serial/PingHostFromPods 1.29
164 TestMultiControlPlane/serial/AddWorkerNode 47.36
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
167 TestMultiControlPlane/serial/CopyFile 13.33
168 TestMultiControlPlane/serial/StopSecondaryNode 88.31
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 34.05
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 391.42
173 TestMultiControlPlane/serial/DeleteSecondaryNode 19.29
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 248.5
176 TestMultiControlPlane/serial/RestartCluster 99.49
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 83.36
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 85.84
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.81
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.04
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 84.3
215 TestMountStart/serial/StartWithMountFirst 21.38
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 24.85
218 TestMountStart/serial/VerifyMountSecond 0.39
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.35
222 TestMountStart/serial/RestartStopped 20.57
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 98.78
227 TestMultiNode/serial/DeployApp2Nodes 5.38
228 TestMultiNode/serial/PingHostFrom2Pods 0.79
229 TestMultiNode/serial/AddNode 43.38
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.61
232 TestMultiNode/serial/CopyFile 7.4
233 TestMultiNode/serial/StopNode 2.54
234 TestMultiNode/serial/StartAfterStop 37.76
235 TestMultiNode/serial/RestartKeepsNodes 296.72
236 TestMultiNode/serial/DeleteNode 2.9
237 TestMultiNode/serial/StopMultiNode 167.68
238 TestMultiNode/serial/RestartMultiNode 96.03
239 TestMultiNode/serial/ValidateNameConflict 42.71
246 TestScheduledStopUnix 109.68
250 TestRunningBinaryUpgrade 109.36
252 TestKubernetesUpgrade 241.79
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 85.02
264 TestNetworkPlugins/group/false 3.06
268 TestStoppedBinaryUpgrade/Setup 0.68
269 TestStoppedBinaryUpgrade/Upgrade 161.83
270 TestNoKubernetes/serial/StartWithStopK8s 31.09
271 TestNoKubernetes/serial/Start 55.88
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
273 TestNoKubernetes/serial/ProfileList 9.39
274 TestNoKubernetes/serial/Stop 1.44
275 TestNoKubernetes/serial/StartNoArgs 37.8
276 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
285 TestPause/serial/Start 101.88
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
287 TestNetworkPlugins/group/auto/Start 91.81
289 TestNetworkPlugins/group/flannel/Start 79.99
290 TestNetworkPlugins/group/enable-default-cni/Start 88.27
291 TestNetworkPlugins/group/auto/KubeletFlags 0.21
292 TestNetworkPlugins/group/auto/NetCatPod 11.24
293 TestNetworkPlugins/group/flannel/ControllerPod 6.01
294 TestNetworkPlugins/group/auto/DNS 0.15
295 TestNetworkPlugins/group/auto/Localhost 0.14
296 TestNetworkPlugins/group/auto/HairPin 0.15
297 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
298 TestNetworkPlugins/group/flannel/NetCatPod 11.3
299 TestNetworkPlugins/group/bridge/Start 89.81
300 TestNetworkPlugins/group/flannel/DNS 0.2
301 TestNetworkPlugins/group/flannel/Localhost 0.16
302 TestNetworkPlugins/group/flannel/HairPin 0.16
303 TestNetworkPlugins/group/calico/Start 74.91
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
306 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
307 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
308 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
309 TestNetworkPlugins/group/kindnet/Start 66.1
310 TestNetworkPlugins/group/custom-flannel/Start 92.76
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
312 TestNetworkPlugins/group/bridge/NetCatPod 12.28
313 TestNetworkPlugins/group/calico/ControllerPod 5.11
314 TestNetworkPlugins/group/calico/KubeletFlags 0.22
315 TestNetworkPlugins/group/calico/NetCatPod 11.9
316 TestNetworkPlugins/group/bridge/DNS 0.22
317 TestNetworkPlugins/group/bridge/Localhost 0.18
318 TestNetworkPlugins/group/bridge/HairPin 0.15
319 TestNetworkPlugins/group/calico/DNS 0.2
320 TestNetworkPlugins/group/calico/Localhost 0.18
321 TestNetworkPlugins/group/calico/HairPin 0.18
323 TestStartStop/group/old-k8s-version/serial/FirstStart 97.44
325 TestStartStop/group/no-preload/serial/FirstStart 124.08
326 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
327 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
328 TestNetworkPlugins/group/kindnet/NetCatPod 10.36
329 TestNetworkPlugins/group/kindnet/DNS 0.41
330 TestNetworkPlugins/group/kindnet/Localhost 0.16
331 TestNetworkPlugins/group/kindnet/HairPin 0.14
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
335 TestStartStop/group/embed-certs/serial/FirstStart 94.13
336 TestNetworkPlugins/group/custom-flannel/DNS 0.21
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.27
341 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.96
343 TestStartStop/group/old-k8s-version/serial/Stop 73.61
344 TestStartStop/group/no-preload/serial/DeployApp 10.3
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
346 TestStartStop/group/embed-certs/serial/DeployApp 10.26
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
348 TestStartStop/group/no-preload/serial/Stop 78.21
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
350 TestStartStop/group/default-k8s-diff-port/serial/Stop 75.17
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
352 TestStartStop/group/embed-certs/serial/Stop 89.37
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
354 TestStartStop/group/old-k8s-version/serial/SecondStart 43.93
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
356 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.83
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
358 TestStartStop/group/no-preload/serial/SecondStart 78.92
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
362 TestStartStop/group/embed-certs/serial/SecondStart 60.25
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
364 TestStartStop/group/old-k8s-version/serial/Pause 3.5
366 TestStartStop/group/newest-cni/serial/FirstStart 75.11
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.95
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
376 TestStartStop/group/no-preload/serial/Pause 3.2
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
378 TestStartStop/group/embed-certs/serial/Pause 3.27
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
381 TestStartStop/group/newest-cni/serial/Stop 7.45
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
383 TestStartStop/group/newest-cni/serial/SecondStart 36.46
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
387 TestStartStop/group/newest-cni/serial/Pause 3.32
TestDownloadOnly/v1.28.0/json-events (7.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-826870 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-826870 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.810107369s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.81s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1013 21:17:54.422414   19947 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1013 21:17:54.422533   19947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-826870
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-826870: exit status 85 (60.739836ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-826870 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-826870 │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:17:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:17:46.652879   19959 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:17:46.653126   19959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:17:46.653135   19959 out.go:374] Setting ErrFile to fd 2...
	I1013 21:17:46.653139   19959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:17:46.653349   19959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	W1013 21:17:46.653487   19959 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-15625/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-15625/.minikube/config/config.json: no such file or directory
	I1013 21:17:46.653942   19959 out.go:368] Setting JSON to true
	I1013 21:17:46.654849   19959 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3615,"bootTime":1760386652,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:17:46.654929   19959 start.go:141] virtualization: kvm guest
	I1013 21:17:46.657041   19959 out.go:99] [download-only-826870] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:17:46.657205   19959 notify.go:220] Checking for updates...
	W1013 21:17:46.657214   19959 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball: no such file or directory
	I1013 21:17:46.658427   19959 out.go:171] MINIKUBE_LOCATION=21724
	I1013 21:17:46.659796   19959 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:17:46.660994   19959 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:17:46.662391   19959 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:17:46.663597   19959 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1013 21:17:46.665732   19959 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 21:17:46.665941   19959 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:17:47.170321   19959 out.go:99] Using the kvm2 driver based on user configuration
	I1013 21:17:47.170354   19959 start.go:305] selected driver: kvm2
	I1013 21:17:47.170360   19959 start.go:925] validating driver "kvm2" against <nil>
	I1013 21:17:47.170685   19959 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:17:47.170812   19959 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:17:47.186247   19959 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:17:47.186289   19959 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21724-15625/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1013 21:17:47.199295   19959 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1013 21:17:47.199340   19959 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:17:47.199867   19959 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1013 21:17:47.200031   19959 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 21:17:47.200054   19959 cni.go:84] Creating CNI manager for ""
	I1013 21:17:47.200099   19959 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1013 21:17:47.200110   19959 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1013 21:17:47.200152   19959 start.go:349] cluster config:
	{Name:download-only-826870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-826870 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:17:47.200346   19959 iso.go:125] acquiring lock: {Name:mkb744e09089d0ab8a5ae3294003cf719d380bf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:17:47.202228   19959 out.go:99] Downloading VM boot image ...
	I1013 21:17:47.202264   19959 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21724-15625/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1013 21:17:50.436966   19959 out.go:99] Starting "download-only-826870" primary control-plane node in "download-only-826870" cluster
	I1013 21:17:50.436995   19959 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 21:17:50.457504   19959 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1013 21:17:50.457543   19959 cache.go:58] Caching tarball of preloaded images
	I1013 21:17:50.457731   19959 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 21:17:50.459930   19959 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1013 21:17:50.459957   19959 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1013 21:17:50.486709   19959 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1013 21:17:50.486829   19959 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-826870 host does not exist
	  To start a cluster, run: "minikube start -p download-only-826870"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-826870
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-938231 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-938231 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4.078475833s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1013 21:17:58.845859   19947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1013 21:17:58.845911   19947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-15625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-938231
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-938231: exit status 85 (60.239571ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-826870 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-826870 │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │ 13 Oct 25 21:17 UTC │
	│ delete  │ -p download-only-826870                                                                                                                                                                             │ download-only-826870 │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │ 13 Oct 25 21:17 UTC │
	│ start   │ -o=json --download-only -p download-only-938231 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-938231 │ jenkins │ v1.37.0 │ 13 Oct 25 21:17 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:17:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:17:54.806288   20183 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:17:54.806492   20183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:17:54.806500   20183 out.go:374] Setting ErrFile to fd 2...
	I1013 21:17:54.806504   20183 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:17:54.806664   20183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:17:54.807119   20183 out.go:368] Setting JSON to true
	I1013 21:17:54.807928   20183 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3623,"bootTime":1760386652,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:17:54.808013   20183 start.go:141] virtualization: kvm guest
	I1013 21:17:54.809680   20183 out.go:99] [download-only-938231] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:17:54.809802   20183 notify.go:220] Checking for updates...
	I1013 21:17:54.811218   20183 out.go:171] MINIKUBE_LOCATION=21724
	I1013 21:17:54.812764   20183 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:17:54.814285   20183 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:17:54.815631   20183 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:17:54.816901   20183 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-938231 host does not exist
	  To start a cluster, run: "minikube start -p download-only-938231"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-938231
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1013 21:17:59.443276   19947 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-576009 --alsologtostderr --binary-mirror http://127.0.0.1:46135 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-576009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-576009
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (106.9s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-787300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-787300 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.947672424s)
helpers_test.go:175: Cleaning up "offline-crio-787300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-787300
--- PASS: TestOffline (106.90s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-323324
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-323324: exit status 85 (50.628148ms)

                                                
                                                
-- stdout --
	* Profile "addons-323324" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-323324"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-323324
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-323324: exit status 85 (52.257388ms)

                                                
                                                
-- stdout --
	* Profile "addons-323324" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-323324"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (169.16s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-323324 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-323324 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.155405235s)
--- PASS: TestAddons/Setup (169.16s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-323324 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-323324 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.6s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-323324 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-323324 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [30611480-4cab-4670-840b-c6b0d2f9f7ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [30611480-4cab-4670-840b-c6b0d2f9f7ea] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005663038s
addons_test.go:694: (dbg) Run:  kubectl --context addons-323324 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-323324 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-323324 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.60s)

                                                
                                    
TestAddons/parallel/Registry (16.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.684679ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-n6l2x" [bfe55504-a420-43d7-8ce8-5e3ac252cb0a] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006313748s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-l6gn2" [7956ef83-7889-4dd9-90e1-84cc5079dd16] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003729832s
addons_test.go:392: (dbg) Run:  kubectl --context addons-323324 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-323324 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-323324 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.94556566s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 ip
2025/10/13 21:21:23 [DEBUG] GET http://192.168.39.156:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.95s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.96s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 9.883148ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-323324
addons_test.go:332: (dbg) Run:  kubectl --context addons-323324 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.96s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-w7k84" [80cdcf0d-89ad-4fec-bb90-68a707dc90c4] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007326969s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.52s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.523831ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9l7cd" [a4b023ce-1b41-417f-9b68-195c1d98b084] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007131656s
addons_test.go:463: (dbg) Run:  kubectl --context addons-323324 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 addons disable metrics-server --alsologtostderr -v=1: (1.037133383s)
--- PASS: TestAddons/parallel/MetricsServer (6.12s)

                                                
                                    
TestAddons/parallel/CSI (53.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1013 21:21:13.926282   19947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1013 21:21:13.935024   19947 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1013 21:21:13.935051   19947 kapi.go:107] duration metric: took 8.789258ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.799042ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-323324 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-323324 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [93d4ab09-9ab9-457e-ba18-79e22edc0d7d] Pending
helpers_test.go:352: "task-pv-pod" [93d4ab09-9ab9-457e-ba18-79e22edc0d7d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [93d4ab09-9ab9-457e-ba18-79e22edc0d7d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.011857758s
addons_test.go:572: (dbg) Run:  kubectl --context addons-323324 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-323324 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-323324 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-323324 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-323324 delete pod task-pv-pod: (1.086644133s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-323324 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-323324 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-323324 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d84d5a56-a2d5-472c-ba7c-78bed74cb840] Pending
helpers_test.go:352: "task-pv-pod-restore" [d84d5a56-a2d5-472c-ba7c-78bed74cb840] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d84d5a56-a2d5-472c-ba7c-78bed74cb840] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005129464s
addons_test.go:614: (dbg) Run:  kubectl --context addons-323324 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-323324 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-323324 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.084839652s)
--- PASS: TestAddons/parallel/CSI (53.02s)

                                                
                                    
TestAddons/parallel/Headlamp (20.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-323324 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-sdpjb" [1a81e03a-ad50-4a7f-b543-3eb8d110cb9f] Pending
helpers_test.go:352: "headlamp-6945c6f4d-sdpjb" [1a81e03a-ad50-4a7f-b543-3eb8d110cb9f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-sdpjb" [1a81e03a-ad50-4a7f-b543-3eb8d110cb9f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.233098088s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 addons disable headlamp --alsologtostderr -v=1: (6.821299463s)
--- PASS: TestAddons/parallel/Headlamp (20.01s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-g6glp" [afa8bba8-015d-4127-8a95-c2f7983809b6] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004831083s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

                                                
                                    
TestAddons/parallel/LocalPath (12.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-323324 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-323324 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b63f9df0-eaaa-49ec-ab62-d6b1623fb49d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b63f9df0-eaaa-49ec-ab62-d6b1623fb49d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b63f9df0-eaaa-49ec-ab62-d6b1623fb49d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005310617s
addons_test.go:967: (dbg) Run:  kubectl --context addons-323324 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 ssh "cat /opt/local-path-provisioner/pvc-b4ec6a44-54cb-4cec-ad26-77ce732a0da9_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-323324 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-323324 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.25s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-4hznp" [a270c687-0bcb-46d7-8ef1-81523f6ef017] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004980552s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                    
TestAddons/parallel/Yakd (12.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-jl2ls" [1de96654-d417-42b5-bb98-b4f10c0ff75a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.029137976s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323324 addons disable yakd --alsologtostderr -v=1: (6.523381471s)
--- PASS: TestAddons/parallel/Yakd (12.55s)

                                                
                                    
TestAddons/StoppedEnableDisable (80.59s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-323324
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-323324: (1m20.309582891s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-323324
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-323324
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-323324
--- PASS: TestAddons/StoppedEnableDisable (80.59s)

                                                
                                    
TestCertOptions (77.95s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-746983 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-746983 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.602680021s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-746983 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-746983 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-746983 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-746983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-746983
--- PASS: TestCertOptions (77.95s)

                                                
                                    
TestCertExpiration (281.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-591329 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-591329 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.17766319s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-591329 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-591329 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (34.330326117s)
helpers_test.go:175: Cleaning up "cert-expiration-591329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-591329
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-591329: (1.046380979s)
--- PASS: TestCertExpiration (281.56s)

                                                
                                    
TestForceSystemdFlag (84.11s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-331035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-331035 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.006904955s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-331035 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-331035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-331035
--- PASS: TestForceSystemdFlag (84.11s)

                                                
                                    
TestForceSystemdEnv (43.31s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-815659 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-815659 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.438262923s)
helpers_test.go:175: Cleaning up "force-systemd-env-815659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-815659
--- PASS: TestForceSystemdEnv (43.31s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.51s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1013 22:36:50.313518   19947 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1013 22:36:50.313639   19947 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4046071333/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 22:36:50.345243   19947 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4046071333/001/docker-machine-driver-kvm2 version is 1.1.1
W1013 22:36:50.345276   19947 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1013 22:36:50.345392   19947 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1013 22:36:50.345431   19947 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4046071333/001/docker-machine-driver-kvm2
I1013 22:36:50.692707   19947 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4046071333/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1013 22:36:50.709144   19947 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4046071333/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.51s)
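
The install log above shows the whole update path: the driver already on PATH reports version 1.1.1, the suite wants 1.37.0, so the new binary is downloaded together with its .sha256 checksum and validated again. A sketch of the same integrity check done by hand, using the release URLs from the log (compare the sha256sum output against the contents of the .sha256 file):

	curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
	curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256
	sha256sum docker-machine-driver-kvm2-amd64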

                                                
                                    
TestErrorSpam/setup (42.21s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-455348 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-455348 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 21:25:49.951365   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:49.957852   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:49.969230   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:49.990626   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:50.032105   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:50.113608   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:50.275227   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:50.596722   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:51.239060   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:52.521395   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:25:55.084294   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:00.205818   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-455348 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-455348 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.211916073s)
--- PASS: TestErrorSpam/setup (42.21s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 status
E1013 21:26:10.447926   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
TestErrorSpam/unpause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

                                                
                                    
TestErrorSpam/stop (88.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 stop
E1013 21:26:30.929967   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:27:11.892583   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 stop: (1m25.737323668s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-455348 --log_dir /tmp/nospam-455348 stop: (1.8781097s)
--- PASS: TestErrorSpam/stop (88.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-15625/.minikube/files/etc/test/nested/copy/19947/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-613120 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 21:28:33.814032   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-613120 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.933775827s)
--- PASS: TestFunctional/serial/StartWithProxy (80.93s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.99s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1013 21:29:04.481568   19947 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-613120 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-613120 --alsologtostderr -v=8: (37.989523758s)
functional_test.go:678: soft start took 37.990292315s for "functional-613120" cluster.
I1013 21:29:42.471467   19947 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.99s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-613120 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 cache add registry.k8s.io/pause:3.1: (1.073447747s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 cache add registry.k8s.io/pause:3.3: (1.117099963s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 cache add registry.k8s.io/pause:latest: (1.090834329s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-613120 /tmp/TestFunctionalserialCacheCmdcacheadd_local2158272770/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cache add minikube-local-cache-test:functional-613120
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cache delete minikube-local-cache-test:functional-613120
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-613120
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.279243ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 cache reload: (1.025306812s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
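
The failed inspecti in the middle of this test is expected rather than a flake: crictl rmi removes pause:latest from the node, the next inspecti exits 1 with "no such image", and cache reload pushes the cached images back into the runtime so the final inspecti succeeds. The same round trip can be reproduced against this run's profile with the commands already shown above:

	out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-613120 cache reload
	out/minikube-linux-amd64 -p functional-613120 ssh sudo crictl inspecti registry.k8s.io/pause:latest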

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 kubectl -- --context functional-613120 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-613120 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs: (1.402044754s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 logs --file /tmp/TestFunctionalserialLogsFileCmd3157987300/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 logs --file /tmp/TestFunctionalserialLogsFileCmd3157987300/001/logs.txt: (1.391442476s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-613120 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-613120
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-613120: exit status 115 (299.29189ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.113:32060 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-613120 delete -f testdata/invalidsvc.yaml
E1013 21:35:49.942787   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2332: (dbg) Done: kubectl --context functional-613120 delete -f testdata/invalidsvc.yaml: (1.195895086s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)
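
Exit status 115 with SVC_UNREACHABLE is the outcome this test is looking for: testdata/invalidsvc.yaml defines a Service whose selector matches no running pod, so minikube prints the NodePort URL but refuses to report the service as reachable. A quick way to confirm the same condition before calling minikube service (sketch, using the context and service name from this run):

	kubectl --context functional-613120 get endpoints invalid-svc
	# an empty ENDPOINTS column means no ready pod is backing the service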

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 config get cpus: exit status 14 (54.988089ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 config get cpus: exit status 14 (54.620081ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
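
The two exit status 14 results are intentional: config get on a key that has just been unset prints "specified key could not be found in config" and exits non-zero, which is exactly what the test asserts around the set/unset calls. Scripts that read optional keys need to tolerate that exit code; a minimal sketch:

	# fall back to a default when the key is absent from minikube's config store
	cpus=$(out/minikube-linux-amd64 -p functional-613120 config get cpus 2>/dev/null) || cpus=2
	echo "cpus=${cpus}"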

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-613120 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-613120 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (139.102577ms)

                                                
                                                
-- stdout --
	* [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:35:51.366068   28090 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:35:51.366404   28090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.366415   28090 out.go:374] Setting ErrFile to fd 2...
	I1013 21:35:51.366419   28090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.366725   28090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:35:51.367241   28090 out.go:368] Setting JSON to false
	I1013 21:35:51.368414   28090 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4699,"bootTime":1760386652,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:35:51.368522   28090 start.go:141] virtualization: kvm guest
	I1013 21:35:51.370825   28090 out.go:179] * [functional-613120] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 21:35:51.372406   28090 notify.go:220] Checking for updates...
	I1013 21:35:51.372417   28090 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:35:51.374129   28090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:35:51.375830   28090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:35:51.377318   28090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:35:51.378808   28090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:35:51.380097   28090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:35:51.381931   28090 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:35:51.382598   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.382673   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.401708   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I1013 21:35:51.402223   28090 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.402752   28090 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.402777   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.403199   28090 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.403404   28090 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.403682   28090 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:35:51.403962   28090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.404001   28090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.417509   28090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I1013 21:35:51.418017   28090 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.418544   28090 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.418565   28090 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.418871   28090 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.419063   28090 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.450478   28090 out.go:179] * Using the kvm2 driver based on existing profile
	I1013 21:35:51.453364   28090 start.go:305] selected driver: kvm2
	I1013 21:35:51.453383   28090 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.453516   28090 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:35:51.455845   28090 out.go:203] 
	W1013 21:35:51.457070   28090 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 21:35:51.458285   28090 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-613120 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-613120 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-613120 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (150.536314ms)

                                                
                                                
-- stdout --
	* [functional-613120] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:35:51.223501   28030 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:35:51.223598   28030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.223602   28030 out.go:374] Setting ErrFile to fd 2...
	I1013 21:35:51.223606   28030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:35:51.223877   28030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:35:51.224333   28030 out.go:368] Setting JSON to false
	I1013 21:35:51.225218   28030 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4699,"bootTime":1760386652,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 21:35:51.225314   28030 start.go:141] virtualization: kvm guest
	I1013 21:35:51.227247   28030 out.go:179] * [functional-613120] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1013 21:35:51.228771   28030 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:35:51.228763   28030 notify.go:220] Checking for updates...
	I1013 21:35:51.231023   28030 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:35:51.232467   28030 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 21:35:51.233812   28030 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 21:35:51.235086   28030 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 21:35:51.236405   28030 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:35:51.238452   28030 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:35:51.239105   28030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.239192   28030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.257840   28030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
	I1013 21:35:51.258241   28030 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.258732   28030 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.258750   28030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.259117   28030 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.259339   28030 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.259608   28030 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:35:51.260052   28030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:35:51.260122   28030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:35:51.277460   28030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I1013 21:35:51.277853   28030 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:35:51.278309   28030 main.go:141] libmachine: Using API Version  1
	I1013 21:35:51.278343   28030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:35:51.278667   28030 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:35:51.278852   28030 main.go:141] libmachine: (functional-613120) Calling .DriverName
	I1013 21:35:51.311715   28030 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1013 21:35:51.313340   28030 start.go:305] selected driver: kvm2
	I1013 21:35:51.313355   28030 start.go:925] validating driver "kvm2" against &{Name:functional-613120 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-613120 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.113 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:35:51.313429   28030 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:35:51.315365   28030 out.go:203] 
	W1013 21:35:51.316790   28030 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1013 21:35:51.318020   28030 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh -n functional-613120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cp functional-613120:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2602273253/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh -n functional-613120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh -n functional-613120 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/19947/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /etc/test/nested/copy/19947/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/19947.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /etc/ssl/certs/19947.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/19947.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /usr/share/ca-certificates/19947.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/199472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /etc/ssl/certs/199472.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/199472.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /usr/share/ca-certificates/199472.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.21s)
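
The file names ending in .0 are not arbitrary: minikube appears to install each synced certificate under /etc/ssl/certs/ both by its original name (19947.pem, 199472.pem) and under an OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), the c_rehash-style lookup scheme TLS tooling expects. A sketch for reproducing such a hash for any PEM on the host (the path here is illustrative, not taken from this run):

	openssl x509 -hash -noout -in /path/to/19947.pem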

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-613120 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh "sudo systemctl is-active docker": exit status 1 (200.524229ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh "sudo systemctl is-active containerd": exit status 1 (195.828401ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
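
Both probes behave as intended on a crio cluster: systemctl is-active docker and systemctl is-active containerd print "inactive" and exit with status 3, and that exit status is what surfaces as "ssh: Process exited with status 3" above. A script that only cares about the boolean can branch on the exit code directly (sketch, reusing this run's profile):

	if out/minikube-linux-amd64 -p functional-613120 ssh "sudo systemctl is-active --quiet docker"; then
		echo "docker is the active runtime"
	else
		echo "docker is not active"
	fi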

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "327.06217ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "54.625249ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "292.916822ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "45.464258ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
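The three ProfileCmd subtests above differ only in output flags; a minimal sketch of the same invocations, assuming the binary at out/minikube-linux-amd64. The light variants skip per-profile status lookups, which would explain why they return in ~50ms versus ~300ms in this run:
    out/minikube-linux-amd64 profile list                   # table output with cluster status
    out/minikube-linux-amd64 profile list -l                # light mode, no status checks
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light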

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdany-port605649670/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760391586990722248" to /tmp/TestFunctionalparallelMountCmdany-port605649670/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760391586990722248" to /tmp/TestFunctionalparallelMountCmdany-port605649670/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760391586990722248" to /tmp/TestFunctionalparallelMountCmdany-port605649670/001/test-1760391586990722248
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (194.813846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1013 21:39:47.185833   19947 retry.go:31] will retry after 565.697473ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 13 21:39 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 13 21:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 13 21:39 test-1760391586990722248
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh cat /mount-9p/test-1760391586990722248
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-613120 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c40a2d9a-c334-4b51-8f13-dd88c18eed33] Pending
helpers_test.go:352: "busybox-mount" [c40a2d9a-c334-4b51-8f13-dd88c18eed33] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c40a2d9a-c334-4b51-8f13-dd88c18eed33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c40a2d9a-c334-4b51-8f13-dd88c18eed33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003063738s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-613120 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdany-port605649670/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.41s)
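The any-port flow above boils down to a 9p host mount plus guest-side checks; a minimal sketch of the same steps, assuming a running functional-613120 profile (/tmp/hostdir is a hypothetical host directory). The first findmnt probe may fail with exit 1 while the mount is still coming up, as it did above, so a short retry is expected:
    out/minikube-linux-amd64 mount -p functional-613120 /tmp/hostdir:/mount-9p &             # /tmp/hostdir is hypothetical; keep the mount helper running
    out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p"       # verify the 9p mount is visible in the guest
    out/minikube-linux-amd64 -p functional-613120 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-613120 ssh "sudo umount -f /mount-9p"             # clean up when done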

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdspecific-port846528820/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.369711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1013 21:39:54.592319   19947 retry.go:31] will retry after 661.796333ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdspecific-port846528820/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh "sudo umount -f /mount-9p": exit status 1 (190.732542ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-613120 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdspecific-port846528820/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)
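The specific-port variant is the same flow with a fixed server port; a minimal sketch, with port 46464 taken from the run above and a hypothetical host path. Note that umount -f on a path that is not mounted exits 32, which the test tolerates:
    out/minikube-linux-amd64 mount -p functional-613120 /tmp/hostdir:/mount-9p --port 46464 &   # /tmp/hostdir is hypothetical
    out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T /mount-9p | grep 9p"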

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2881587289/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2881587289/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2881587289/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T" /mount1: exit status 1 (207.280302ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1013 21:39:56.442989   19947 retry.go:31] will retry after 590.114139ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-613120 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2881587289/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2881587289/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-613120 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2881587289/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)
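VerifyCleanup starts three mounts of the same host directory and then relies on a single kill switch; a minimal sketch, assuming the same profile and a hypothetical host path:
    out/minikube-linux-amd64 mount -p functional-613120 /tmp/hostdir:/mount1 &    # /tmp/hostdir is hypothetical
    out/minikube-linux-amd64 mount -p functional-613120 /tmp/hostdir:/mount2 &
    out/minikube-linux-amd64 mount -p functional-613120 /tmp/hostdir:/mount3 &
    out/minikube-linux-amd64 -p functional-613120 ssh "findmnt -T" /mount1        # repeat for /mount2 and /mount3
    out/minikube-linux-amd64 mount -p functional-613120 --kill=true               # tears down all mount helpers for the profile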

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)
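Both version checks map to ordinary CLI invocations; a minimal sketch, assuming the same profile:
    out/minikube-linux-amd64 -p functional-613120 version --short
    out/minikube-linux-amd64 -p functional-613120 version -o=json --components    # per-component versions as JSON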

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-613120 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-613120
localhost/kicbase/echo-server:functional-613120
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-613120 image ls --format short --alsologtostderr:
I1013 21:40:54.350044   31234 out.go:360] Setting OutFile to fd 1 ...
I1013 21:40:54.350293   31234 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:54.350303   31234 out.go:374] Setting ErrFile to fd 2...
I1013 21:40:54.350307   31234 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:54.350548   31234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
I1013 21:40:54.351139   31234 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:54.351260   31234 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:54.351652   31234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:54.351724   31234 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:54.364831   31234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
I1013 21:40:54.365277   31234 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:54.365801   31234 main.go:141] libmachine: Using API Version  1
I1013 21:40:54.365827   31234 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:54.366188   31234 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:54.366414   31234 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:40:54.368410   31234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:54.368456   31234 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:54.381287   31234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
I1013 21:40:54.381715   31234 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:54.382144   31234 main.go:141] libmachine: Using API Version  1
I1013 21:40:54.382175   31234 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:54.382453   31234 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:54.382625   31234 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:40:54.382786   31234 ssh_runner.go:195] Run: systemctl --version
I1013 21:40:54.382811   31234 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:40:54.385462   31234 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:54.385885   31234 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:40:54.385922   31234 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:54.386075   31234 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:40:54.386241   31234 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:40:54.386387   31234 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:40:54.386554   31234 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:40:54.472174   31234 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 21:40:54.512070   31234 main.go:141] libmachine: Making call to close driver server
I1013 21:40:54.512082   31234 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:54.512362   31234 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:54.512381   31234 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:40:54.512389   31234 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:54.512391   31234 main.go:141] libmachine: Making call to close driver server
I1013 21:40:54.512402   31234 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:54.512635   31234 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:54.512658   31234 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:54.512665   31234 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
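The four ImageList subtests in this block run the same command with different formatters; a minimal sketch of the variants, assuming the same profile. As the stderr above shows, each invocation shells into the VM and gathers the data via "sudo crictl images --output json":
    out/minikube-linux-amd64 -p functional-613120 image ls --format short
    out/minikube-linux-amd64 -p functional-613120 image ls --format table
    out/minikube-linux-amd64 -p functional-613120 image ls --format json
    out/minikube-linux-amd64 -p functional-613120 image ls --format yaml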

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-613120 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-613120  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-613120  │ c7c6e027fbdff │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-613120  │ 44881df380b43 │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-613120 image ls --format table --alsologtostderr:
I1013 21:40:57.944252   31401 out.go:360] Setting OutFile to fd 1 ...
I1013 21:40:57.944489   31401 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:57.944498   31401 out.go:374] Setting ErrFile to fd 2...
I1013 21:40:57.944501   31401 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:57.944670   31401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
I1013 21:40:57.945188   31401 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:57.945277   31401 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:57.945704   31401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:57.945775   31401 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:57.959026   31401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
I1013 21:40:57.959538   31401 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:57.960102   31401 main.go:141] libmachine: Using API Version  1
I1013 21:40:57.960127   31401 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:57.960616   31401 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:57.960824   31401 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:40:57.962915   31401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:57.962958   31401 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:57.975982   31401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36065
I1013 21:40:57.976400   31401 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:57.976823   31401 main.go:141] libmachine: Using API Version  1
I1013 21:40:57.976843   31401 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:57.977265   31401 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:57.977476   31401 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:40:57.977675   31401 ssh_runner.go:195] Run: systemctl --version
I1013 21:40:57.977698   31401 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:40:57.980831   31401 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:57.981340   31401 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:40:57.981362   31401 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:57.981528   31401 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:40:57.981697   31401 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:40:57.981869   31401 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:40:57.982025   31401 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:40:58.068827   31401 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 21:40:58.110499   31401 main.go:141] libmachine: Making call to close driver server
I1013 21:40:58.110516   31401 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:58.110849   31401 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:58.110860   31401 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:58.110873   31401 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:40:58.110886   31401 main.go:141] libmachine: Making call to close driver server
I1013 21:40:58.110901   31401 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:58.111118   31401 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:58.111118   31401 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:58.111147   31401 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-613120 image ls --format json --alsologtostderr:
[{"id":"c7c6e027fbdff1d341f906178fcdf9a68c21df97ef2b7eb8edc7f2afb0df31a5","repoDigests":["localhost/minikube-local-cache-test@sha256:6f63fbf61c18c923c95deacedcefd8148a2ae5fd9f27cbc28dada426929830fa"],"repoTags":["localhost/minikube-local-cache-test:functional-613120"],"size":"3330"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-
scheduler:v1.34.1"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"966d279ace024b0785ade562480786b537e260abd3e29a1b61855dedaf0b0334","repoDigests":["docker.io/library/2c48b77c241467bad213cbee1162d3b42bea46022990f9f8d49a9e99b78ae9a6-tmp@sha256:c4674f18d8e45e795e7538adcc0a31c9e388cbf8def0d5d9d980a8c0232a64f2"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busyb
ox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-613120"],"size":"4943877"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["regist
ry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9d
c6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-a
piserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"44881df380b43233c46df2a400d18ebfd8013ae134adf37725d33b3a412b01a8","repoDigests":["localhost/my-image@sha256:d73b5c3e734785a5b287f2f463fe3d3941bbb256e52be33b468098f730b35399"],"repoTags":["localhost/my-image:functional-613120"],"size":"1468600"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-613120 image ls --format json --alsologtostderr:
I1013 21:40:57.731007   31377 out.go:360] Setting OutFile to fd 1 ...
I1013 21:40:57.731264   31377 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:57.731272   31377 out.go:374] Setting ErrFile to fd 2...
I1013 21:40:57.731276   31377 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:57.731465   31377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
I1013 21:40:57.732130   31377 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:57.732263   31377 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:57.732665   31377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:57.732705   31377 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:57.745491   31377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
I1013 21:40:57.746104   31377 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:57.746888   31377 main.go:141] libmachine: Using API Version  1
I1013 21:40:57.746911   31377 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:57.747544   31377 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:57.747734   31377 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:40:57.749788   31377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:57.749828   31377 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:57.762899   31377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44009
I1013 21:40:57.763433   31377 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:57.763908   31377 main.go:141] libmachine: Using API Version  1
I1013 21:40:57.763931   31377 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:57.764300   31377 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:57.764487   31377 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:40:57.764669   31377 ssh_runner.go:195] Run: systemctl --version
I1013 21:40:57.764691   31377 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:40:57.767813   31377 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:57.768297   31377 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:40:57.768327   31377 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:57.768474   31377 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:40:57.768641   31377 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:40:57.768781   31377 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:40:57.768976   31377 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:40:57.854731   31377 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 21:40:57.894320   31377 main.go:141] libmachine: Making call to close driver server
I1013 21:40:57.894333   31377 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:57.894616   31377 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:57.894644   31377 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:57.894656   31377 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:40:57.894665   31377 main.go:141] libmachine: Making call to close driver server
I1013 21:40:57.894673   31377 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:57.894937   31377 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:57.895012   31377 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:57.895054   31377 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-613120 image ls --format yaml --alsologtostderr:
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: c7c6e027fbdff1d341f906178fcdf9a68c21df97ef2b7eb8edc7f2afb0df31a5
repoDigests:
- localhost/minikube-local-cache-test@sha256:6f63fbf61c18c923c95deacedcefd8148a2ae5fd9f27cbc28dada426929830fa
repoTags:
- localhost/minikube-local-cache-test:functional-613120
size: "3330"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-613120
size: "4943877"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-613120 image ls --format yaml --alsologtostderr:
I1013 21:40:54.562064   31258 out.go:360] Setting OutFile to fd 1 ...
I1013 21:40:54.562373   31258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:54.562384   31258 out.go:374] Setting ErrFile to fd 2...
I1013 21:40:54.562388   31258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:54.562598   31258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
I1013 21:40:54.563137   31258 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:54.563246   31258 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:54.563604   31258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:54.563652   31258 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:54.577189   31258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
I1013 21:40:54.577640   31258 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:54.578151   31258 main.go:141] libmachine: Using API Version  1
I1013 21:40:54.578179   31258 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:54.578598   31258 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:54.578824   31258 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:40:54.580709   31258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:54.580756   31258 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:54.593140   31258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
I1013 21:40:54.593494   31258 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:54.593926   31258 main.go:141] libmachine: Using API Version  1
I1013 21:40:54.593946   31258 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:54.594395   31258 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:54.594567   31258 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:40:54.594763   31258 ssh_runner.go:195] Run: systemctl --version
I1013 21:40:54.594785   31258 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:40:54.597382   31258 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:54.597800   31258 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:40:54.597835   31258 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:54.597964   31258 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:40:54.598125   31258 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:40:54.598292   31258 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:40:54.598497   31258 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:40:54.684335   31258 ssh_runner.go:195] Run: sudo crictl images --output json
I1013 21:40:54.724224   31258 main.go:141] libmachine: Making call to close driver server
I1013 21:40:54.724242   31258 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:54.724501   31258 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:54.724521   31258 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:40:54.724530   31258 main.go:141] libmachine: Making call to close driver server
I1013 21:40:54.724539   31258 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:54.724541   31258 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:54.724843   31258 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:54.724884   31258 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:54.724903   31258 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-613120 ssh pgrep buildkitd: exit status 1 (188.094537ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image build -t localhost/my-image:functional-613120 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 image build -t localhost/my-image:functional-613120 testdata/build --alsologtostderr: (2.54600015s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-613120 image build -t localhost/my-image:functional-613120 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 966d279ace0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-613120
--> 44881df380b
Successfully tagged localhost/my-image:functional-613120
44881df380b43233c46df2a400d18ebfd8013ae134adf37725d33b3a412b01a8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-613120 image build -t localhost/my-image:functional-613120 testdata/build --alsologtostderr:
I1013 21:40:54.963481   31312 out.go:360] Setting OutFile to fd 1 ...
I1013 21:40:54.964048   31312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:54.964070   31312 out.go:374] Setting ErrFile to fd 2...
I1013 21:40:54.964078   31312 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:40:54.964649   31312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
I1013 21:40:54.965519   31312 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:54.966148   31312 config.go:182] Loaded profile config "functional-613120": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:40:54.966528   31312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:54.966600   31312 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:54.979907   31312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
I1013 21:40:54.980417   31312 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:54.980945   31312 main.go:141] libmachine: Using API Version  1
I1013 21:40:54.980967   31312 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:54.981371   31312 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:54.981568   31312 main.go:141] libmachine: (functional-613120) Calling .GetState
I1013 21:40:54.983982   31312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1013 21:40:54.984047   31312 main.go:141] libmachine: Launching plugin server for driver kvm2
I1013 21:40:54.997141   31312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34051
I1013 21:40:54.997578   31312 main.go:141] libmachine: () Calling .GetVersion
I1013 21:40:54.997980   31312 main.go:141] libmachine: Using API Version  1
I1013 21:40:54.998006   31312 main.go:141] libmachine: () Calling .SetConfigRaw
I1013 21:40:54.998369   31312 main.go:141] libmachine: () Calling .GetMachineName
I1013 21:40:54.998554   31312 main.go:141] libmachine: (functional-613120) Calling .DriverName
I1013 21:40:54.998769   31312 ssh_runner.go:195] Run: systemctl --version
I1013 21:40:54.998792   31312 main.go:141] libmachine: (functional-613120) Calling .GetSSHHostname
I1013 21:40:55.001563   31312 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:55.002019   31312 main.go:141] libmachine: (functional-613120) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:28:1e", ip: ""} in network mk-functional-613120: {Iface:virbr1 ExpiryTime:2025-10-13 22:27:59 +0000 UTC Type:0 Mac:52:54:00:9f:28:1e Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:functional-613120 Clientid:01:52:54:00:9f:28:1e}
I1013 21:40:55.002050   31312 main.go:141] libmachine: (functional-613120) DBG | domain functional-613120 has defined IP address 192.168.39.113 and MAC address 52:54:00:9f:28:1e in network mk-functional-613120
I1013 21:40:55.002180   31312 main.go:141] libmachine: (functional-613120) Calling .GetSSHPort
I1013 21:40:55.002359   31312 main.go:141] libmachine: (functional-613120) Calling .GetSSHKeyPath
I1013 21:40:55.002512   31312 main.go:141] libmachine: (functional-613120) Calling .GetSSHUsername
I1013 21:40:55.002641   31312 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/functional-613120/id_rsa Username:docker}
I1013 21:40:55.091808   31312 build_images.go:161] Building image from path: /tmp/build.17664185.tar
I1013 21:40:55.091865   31312 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1013 21:40:55.107836   31312 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.17664185.tar
I1013 21:40:55.114245   31312 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.17664185.tar: stat -c "%s %y" /var/lib/minikube/build/build.17664185.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.17664185.tar': No such file or directory
I1013 21:40:55.114285   31312 ssh_runner.go:362] scp /tmp/build.17664185.tar --> /var/lib/minikube/build/build.17664185.tar (3072 bytes)
I1013 21:40:55.149468   31312 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.17664185
I1013 21:40:55.162324   31312 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.17664185 -xf /var/lib/minikube/build/build.17664185.tar
I1013 21:40:55.173721   31312 crio.go:315] Building image: /var/lib/minikube/build/build.17664185
I1013 21:40:55.173776   31312 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-613120 /var/lib/minikube/build/build.17664185 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1013 21:40:57.431052   31312 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-613120 /var/lib/minikube/build/build.17664185 --cgroup-manager=cgroupfs: (2.257255356s)
I1013 21:40:57.431119   31312 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.17664185
I1013 21:40:57.447798   31312 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.17664185.tar
I1013 21:40:57.460353   31312 build_images.go:217] Built localhost/my-image:functional-613120 from /tmp/build.17664185.tar
I1013 21:40:57.460394   31312 build_images.go:133] succeeded building to: functional-613120
I1013 21:40:57.460401   31312 build_images.go:134] failed building to: 
I1013 21:40:57.460429   31312 main.go:141] libmachine: Making call to close driver server
I1013 21:40:57.460449   31312 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:57.460737   31312 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:57.460760   31312 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:57.460782   31312 main.go:141] libmachine: Making call to close connection to plugin binary
I1013 21:40:57.460796   31312 main.go:141] libmachine: Making call to close driver server
I1013 21:40:57.460803   31312 main.go:141] libmachine: (functional-613120) Calling .Close
I1013 21:40:57.461042   31312 main.go:141] libmachine: (functional-613120) DBG | Closing plugin on server side
I1013 21:40:57.461070   31312 main.go:141] libmachine: Successfully made call to close driver server
I1013 21:40:57.461080   31312 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)
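For reference, the build flow recorded above is: upload the build-context tarball to the node, unpack it under /var/lib/minikube/build, run podman build, then remove the work dir and tarball. The following is a minimal illustrative Go sketch of that sequence, not minikube's internal build_images API; it assumes an SSH-reachable node alias (docker@192.168.39.113, taken from the log), non-interactive scp/ssh, and it stages the tarball in /tmp on the node rather than scp-ing it straight into /var/lib/minikube/build as minikube's runner does.

package main

import (
	"log"
	"os/exec"
)

// run executes one local command and fails loudly, loosely mirroring the
// ssh_runner steps in the log above.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	node := "docker@192.168.39.113" // assumed SSH target for the functional-613120 VM
	dir := "/var/lib/minikube/build/build.17664185"

	// Upload the build context to the node (the log scps it directly into
	// /var/lib/minikube/build; /tmp is used here to sidestep permissions).
	run("scp", "/tmp/build.17664185.tar", node+":/tmp/build.17664185.tar")

	// Unpack, build with podman, and clean up, matching the commands in the log.
	for _, cmd := range []string{
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf /tmp/build.17664185.tar",
		"sudo podman build -t localhost/my-image:functional-613120 " + dir + " --cgroup-manager=cgroupfs",
		"sudo rm -rf " + dir,
		"sudo rm -f /tmp/build.17664185.tar",
	} {
		run("ssh", node, cmd)
	}
}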

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-613120
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr: (1.113457681s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-613120
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image load --daemon kicbase/echo-server:functional-613120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image save kicbase/echo-server:functional-613120 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image rm kicbase/echo-server:functional-613120 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-613120
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 image save --daemon kicbase/echo-server:functional-613120 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-613120
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 update-context --alsologtostderr -v=2
E1013 21:42:13.024017   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:45:49.943336   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 service list: (1.235728106s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-613120 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-613120 service list -o json: (1.2350343s)
functional_test.go:1504: Took "1.235136812s" to run "out/minikube-linux-amd64 -p functional-613120 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-613120
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-613120
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-613120
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (224.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 21:50:49.951359   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.006092   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.012465   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.023869   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.045308   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.086708   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.168223   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.329781   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:51.651389   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:52.292717   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:53.574523   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:50:56.136519   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:51:01.257891   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:51:11.499529   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:51:31.981409   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:52:12.943683   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:53:34.866418   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m44.1580847s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (224.90s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 kubectl -- rollout status deployment/busybox: (4.057988351s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-4c9l8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-g9f4h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-kwpmf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-4c9l8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-g9f4h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-kwpmf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-4c9l8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-g9f4h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-kwpmf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.26s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-4c9l8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-4c9l8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-g9f4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-g9f4h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-kwpmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 kubectl -- exec busybox-7b57f96db7-kwpmf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 node add --alsologtostderr -v 5: (46.473031942s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.36s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-592603 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp testdata/cp-test.txt ha-592603:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1561097291/001/cp-test_ha-592603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603:/home/docker/cp-test.txt ha-592603-m02:/home/docker/cp-test_ha-592603_ha-592603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test_ha-592603_ha-592603-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603:/home/docker/cp-test.txt ha-592603-m03:/home/docker/cp-test_ha-592603_ha-592603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test_ha-592603_ha-592603-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603:/home/docker/cp-test.txt ha-592603-m04:/home/docker/cp-test_ha-592603_ha-592603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test_ha-592603_ha-592603-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp testdata/cp-test.txt ha-592603-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1561097291/001/cp-test_ha-592603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m02:/home/docker/cp-test.txt ha-592603:/home/docker/cp-test_ha-592603-m02_ha-592603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test_ha-592603-m02_ha-592603.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m02:/home/docker/cp-test.txt ha-592603-m03:/home/docker/cp-test_ha-592603-m02_ha-592603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test_ha-592603-m02_ha-592603-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m02:/home/docker/cp-test.txt ha-592603-m04:/home/docker/cp-test_ha-592603-m02_ha-592603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test_ha-592603-m02_ha-592603-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp testdata/cp-test.txt ha-592603-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1561097291/001/cp-test_ha-592603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m03:/home/docker/cp-test.txt ha-592603:/home/docker/cp-test_ha-592603-m03_ha-592603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test_ha-592603-m03_ha-592603.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m03:/home/docker/cp-test.txt ha-592603-m02:/home/docker/cp-test_ha-592603-m03_ha-592603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test_ha-592603-m03_ha-592603-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m03:/home/docker/cp-test.txt ha-592603-m04:/home/docker/cp-test_ha-592603-m03_ha-592603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test_ha-592603-m03_ha-592603-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp testdata/cp-test.txt ha-592603-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1561097291/001/cp-test_ha-592603-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m04:/home/docker/cp-test.txt ha-592603:/home/docker/cp-test_ha-592603-m04_ha-592603.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603 "sudo cat /home/docker/cp-test_ha-592603-m04_ha-592603.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m04:/home/docker/cp-test.txt ha-592603-m02:/home/docker/cp-test_ha-592603-m04_ha-592603-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m02 "sudo cat /home/docker/cp-test_ha-592603-m04_ha-592603-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 cp ha-592603-m04:/home/docker/cp-test.txt ha-592603-m03:/home/docker/cp-test_ha-592603-m04_ha-592603-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 ssh -n ha-592603-m03 "sudo cat /home/docker/cp-test_ha-592603-m04_ha-592603-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.33s)
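The CopyFile steps above repeat one pattern per node pair: `minikube cp` a test file onto a node, then `minikube ssh -n <node> "sudo cat ..."` to read it back. Below is a minimal sketch of that round-trip, assuming the minikube binary is on PATH and the ha-592603 profile and node names from the log.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile, node := "ha-592603", "ha-592603-m02"
	src := "testdata/cp-test.txt"

	// Copy the local file onto the node, as the helpers do above.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src,
		node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over SSH and compare with the original contents.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatalf("read %s: %v", src, err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("contents differ after copy")
	}
	log.Println("copy verified")
}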

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (88.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node stop m02 --alsologtostderr -v 5
E1013 21:55:49.943384   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:55:51.005961   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:56:18.708181   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 node stop m02 --alsologtostderr -v 5: (1m27.645916007s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5: exit status 7 (666.974605ms)

                                                
                                                
-- stdout --
	ha-592603
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-592603-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-592603-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-592603-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:56:31.210282   38466 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:56:31.210512   38466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:56:31.210520   38466 out.go:374] Setting ErrFile to fd 2...
	I1013 21:56:31.210524   38466 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:56:31.210719   38466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 21:56:31.210910   38466 out.go:368] Setting JSON to false
	I1013 21:56:31.210937   38466 mustload.go:65] Loading cluster: ha-592603
	I1013 21:56:31.211042   38466 notify.go:220] Checking for updates...
	I1013 21:56:31.211361   38466 config.go:182] Loaded profile config "ha-592603": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:56:31.211377   38466 status.go:174] checking status of ha-592603 ...
	I1013 21:56:31.211769   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.211803   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.225779   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I1013 21:56:31.226371   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.226937   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.226971   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.227331   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.227510   38466 main.go:141] libmachine: (ha-592603) Calling .GetState
	I1013 21:56:31.229311   38466 status.go:371] ha-592603 host status = "Running" (err=<nil>)
	I1013 21:56:31.229329   38466 host.go:66] Checking if "ha-592603" exists ...
	I1013 21:56:31.229789   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.229836   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.243132   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I1013 21:56:31.243680   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.244194   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.244243   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.244590   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.244804   38466 main.go:141] libmachine: (ha-592603) Calling .GetIP
	I1013 21:56:31.248074   38466 main.go:141] libmachine: (ha-592603) DBG | domain ha-592603 has defined MAC address 52:54:00:a5:68:3d in network mk-ha-592603
	I1013 21:56:31.248539   38466 main.go:141] libmachine: (ha-592603) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:68:3d", ip: ""} in network mk-ha-592603: {Iface:virbr1 ExpiryTime:2025-10-13 22:50:25 +0000 UTC Type:0 Mac:52:54:00:a5:68:3d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ha-592603 Clientid:01:52:54:00:a5:68:3d}
	I1013 21:56:31.248562   38466 main.go:141] libmachine: (ha-592603) DBG | domain ha-592603 has defined IP address 192.168.39.34 and MAC address 52:54:00:a5:68:3d in network mk-ha-592603
	I1013 21:56:31.248721   38466 host.go:66] Checking if "ha-592603" exists ...
	I1013 21:56:31.249197   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.249245   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.262221   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I1013 21:56:31.262632   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.263041   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.263060   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.263392   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.263557   38466 main.go:141] libmachine: (ha-592603) Calling .DriverName
	I1013 21:56:31.263750   38466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:56:31.263771   38466 main.go:141] libmachine: (ha-592603) Calling .GetSSHHostname
	I1013 21:56:31.266658   38466 main.go:141] libmachine: (ha-592603) DBG | domain ha-592603 has defined MAC address 52:54:00:a5:68:3d in network mk-ha-592603
	I1013 21:56:31.267096   38466 main.go:141] libmachine: (ha-592603) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:68:3d", ip: ""} in network mk-ha-592603: {Iface:virbr1 ExpiryTime:2025-10-13 22:50:25 +0000 UTC Type:0 Mac:52:54:00:a5:68:3d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:ha-592603 Clientid:01:52:54:00:a5:68:3d}
	I1013 21:56:31.267135   38466 main.go:141] libmachine: (ha-592603) DBG | domain ha-592603 has defined IP address 192.168.39.34 and MAC address 52:54:00:a5:68:3d in network mk-ha-592603
	I1013 21:56:31.267339   38466 main.go:141] libmachine: (ha-592603) Calling .GetSSHPort
	I1013 21:56:31.267529   38466 main.go:141] libmachine: (ha-592603) Calling .GetSSHKeyPath
	I1013 21:56:31.267726   38466 main.go:141] libmachine: (ha-592603) Calling .GetSSHUsername
	I1013 21:56:31.267907   38466 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/ha-592603/id_rsa Username:docker}
	I1013 21:56:31.356657   38466 ssh_runner.go:195] Run: systemctl --version
	I1013 21:56:31.366490   38466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:56:31.385143   38466 kubeconfig.go:125] found "ha-592603" server: "https://192.168.39.254:8443"
	I1013 21:56:31.385211   38466 api_server.go:166] Checking apiserver status ...
	I1013 21:56:31.385251   38466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:56:31.407853   38466 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	W1013 21:56:31.421768   38466 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:56:31.421818   38466 ssh_runner.go:195] Run: ls
	I1013 21:56:31.428725   38466 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1013 21:56:31.434072   38466 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1013 21:56:31.434095   38466 status.go:463] ha-592603 apiserver status = Running (err=<nil>)
	I1013 21:56:31.434106   38466 status.go:176] ha-592603 status: &{Name:ha-592603 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:56:31.434127   38466 status.go:174] checking status of ha-592603-m02 ...
	I1013 21:56:31.434443   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.434477   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.447228   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42723
	I1013 21:56:31.447773   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.448286   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.448315   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.448603   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.448789   38466 main.go:141] libmachine: (ha-592603-m02) Calling .GetState
	I1013 21:56:31.450310   38466 status.go:371] ha-592603-m02 host status = "Stopped" (err=<nil>)
	I1013 21:56:31.450321   38466 status.go:384] host is not running, skipping remaining checks
	I1013 21:56:31.450326   38466 status.go:176] ha-592603-m02 status: &{Name:ha-592603-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:56:31.450338   38466 status.go:174] checking status of ha-592603-m03 ...
	I1013 21:56:31.450665   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.450700   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.465441   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I1013 21:56:31.465843   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.466270   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.466294   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.466605   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.466843   38466 main.go:141] libmachine: (ha-592603-m03) Calling .GetState
	I1013 21:56:31.468567   38466 status.go:371] ha-592603-m03 host status = "Running" (err=<nil>)
	I1013 21:56:31.468588   38466 host.go:66] Checking if "ha-592603-m03" exists ...
	I1013 21:56:31.468867   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.468900   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.482000   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I1013 21:56:31.482406   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.482807   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.482826   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.483124   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.483331   38466 main.go:141] libmachine: (ha-592603-m03) Calling .GetIP
	I1013 21:56:31.486310   38466 main.go:141] libmachine: (ha-592603-m03) DBG | domain ha-592603-m03 has defined MAC address 52:54:00:3b:01:fc in network mk-ha-592603
	I1013 21:56:31.486806   38466 main.go:141] libmachine: (ha-592603-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:01:fc", ip: ""} in network mk-ha-592603: {Iface:virbr1 ExpiryTime:2025-10-13 22:52:38 +0000 UTC Type:0 Mac:52:54:00:3b:01:fc Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-592603-m03 Clientid:01:52:54:00:3b:01:fc}
	I1013 21:56:31.486849   38466 main.go:141] libmachine: (ha-592603-m03) DBG | domain ha-592603-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:3b:01:fc in network mk-ha-592603
	I1013 21:56:31.487011   38466 host.go:66] Checking if "ha-592603-m03" exists ...
	I1013 21:56:31.487337   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.487375   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.499815   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I1013 21:56:31.500319   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.500830   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.500859   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.501195   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.501376   38466 main.go:141] libmachine: (ha-592603-m03) Calling .DriverName
	I1013 21:56:31.501549   38466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:56:31.501568   38466 main.go:141] libmachine: (ha-592603-m03) Calling .GetSSHHostname
	I1013 21:56:31.504694   38466 main.go:141] libmachine: (ha-592603-m03) DBG | domain ha-592603-m03 has defined MAC address 52:54:00:3b:01:fc in network mk-ha-592603
	I1013 21:56:31.505187   38466 main.go:141] libmachine: (ha-592603-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:01:fc", ip: ""} in network mk-ha-592603: {Iface:virbr1 ExpiryTime:2025-10-13 22:52:38 +0000 UTC Type:0 Mac:52:54:00:3b:01:fc Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-592603-m03 Clientid:01:52:54:00:3b:01:fc}
	I1013 21:56:31.505216   38466 main.go:141] libmachine: (ha-592603-m03) DBG | domain ha-592603-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:3b:01:fc in network mk-ha-592603
	I1013 21:56:31.505396   38466 main.go:141] libmachine: (ha-592603-m03) Calling .GetSSHPort
	I1013 21:56:31.505554   38466 main.go:141] libmachine: (ha-592603-m03) Calling .GetSSHKeyPath
	I1013 21:56:31.505682   38466 main.go:141] libmachine: (ha-592603-m03) Calling .GetSSHUsername
	I1013 21:56:31.505780   38466 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/ha-592603-m03/id_rsa Username:docker}
	I1013 21:56:31.591627   38466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:56:31.619818   38466 kubeconfig.go:125] found "ha-592603" server: "https://192.168.39.254:8443"
	I1013 21:56:31.619853   38466 api_server.go:166] Checking apiserver status ...
	I1013 21:56:31.619917   38466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:56:31.641210   38466 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1812/cgroup
	W1013 21:56:31.653997   38466 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1812/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:56:31.654047   38466 ssh_runner.go:195] Run: ls
	I1013 21:56:31.660087   38466 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1013 21:56:31.664826   38466 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1013 21:56:31.664855   38466 status.go:463] ha-592603-m03 apiserver status = Running (err=<nil>)
	I1013 21:56:31.664865   38466 status.go:176] ha-592603-m03 status: &{Name:ha-592603-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:56:31.664885   38466 status.go:174] checking status of ha-592603-m04 ...
	I1013 21:56:31.665275   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.665345   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.678690   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I1013 21:56:31.679190   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.679626   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.679653   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.679989   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.680146   38466 main.go:141] libmachine: (ha-592603-m04) Calling .GetState
	I1013 21:56:31.681779   38466 status.go:371] ha-592603-m04 host status = "Running" (err=<nil>)
	I1013 21:56:31.681796   38466 host.go:66] Checking if "ha-592603-m04" exists ...
	I1013 21:56:31.682067   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.682101   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.696014   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I1013 21:56:31.696583   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.697067   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.697091   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.697421   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.697621   38466 main.go:141] libmachine: (ha-592603-m04) Calling .GetIP
	I1013 21:56:31.700449   38466 main.go:141] libmachine: (ha-592603-m04) DBG | domain ha-592603-m04 has defined MAC address 52:54:00:8a:ca:71 in network mk-ha-592603
	I1013 21:56:31.700892   38466 main.go:141] libmachine: (ha-592603-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ca:71", ip: ""} in network mk-ha-592603: {Iface:virbr1 ExpiryTime:2025-10-13 22:54:19 +0000 UTC Type:0 Mac:52:54:00:8a:ca:71 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-592603-m04 Clientid:01:52:54:00:8a:ca:71}
	I1013 21:56:31.700919   38466 main.go:141] libmachine: (ha-592603-m04) DBG | domain ha-592603-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:8a:ca:71 in network mk-ha-592603
	I1013 21:56:31.701074   38466 host.go:66] Checking if "ha-592603-m04" exists ...
	I1013 21:56:31.701397   38466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 21:56:31.701451   38466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 21:56:31.714752   38466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37899
	I1013 21:56:31.715235   38466 main.go:141] libmachine: () Calling .GetVersion
	I1013 21:56:31.715740   38466 main.go:141] libmachine: Using API Version  1
	I1013 21:56:31.715760   38466 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 21:56:31.716082   38466 main.go:141] libmachine: () Calling .GetMachineName
	I1013 21:56:31.716282   38466 main.go:141] libmachine: (ha-592603-m04) Calling .DriverName
	I1013 21:56:31.716463   38466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:56:31.716481   38466 main.go:141] libmachine: (ha-592603-m04) Calling .GetSSHHostname
	I1013 21:56:31.719404   38466 main.go:141] libmachine: (ha-592603-m04) DBG | domain ha-592603-m04 has defined MAC address 52:54:00:8a:ca:71 in network mk-ha-592603
	I1013 21:56:31.719873   38466 main.go:141] libmachine: (ha-592603-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ca:71", ip: ""} in network mk-ha-592603: {Iface:virbr1 ExpiryTime:2025-10-13 22:54:19 +0000 UTC Type:0 Mac:52:54:00:8a:ca:71 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-592603-m04 Clientid:01:52:54:00:8a:ca:71}
	I1013 21:56:31.719929   38466 main.go:141] libmachine: (ha-592603-m04) DBG | domain ha-592603-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:8a:ca:71 in network mk-ha-592603
	I1013 21:56:31.720071   38466 main.go:141] libmachine: (ha-592603-m04) Calling .GetSSHPort
	I1013 21:56:31.720247   38466 main.go:141] libmachine: (ha-592603-m04) Calling .GetSSHKeyPath
	I1013 21:56:31.720404   38466 main.go:141] libmachine: (ha-592603-m04) Calling .GetSSHUsername
	I1013 21:56:31.720545   38466 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/ha-592603-m04/id_rsa Username:docker}
	I1013 21:56:31.809565   38466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:56:31.829816   38466 status.go:176] ha-592603-m04 status: &{Name:ha-592603-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.31s)
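In the status stderr above, each running control-plane node is judged healthy by probing the load-balanced apiserver endpoint (https://192.168.39.254:8443/healthz) and expecting HTTP 200 with body "ok". The following is a minimal sketch of that probe, reusing the endpoint from the log and skipping TLS verification only because this sketch does not load the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// No CA bundle in this sketch; minikube itself verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		log.Fatalf("healthz request failed: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The status command treats HTTP 200 with body "ok" as a running apiserver.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}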

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 node start m02 --alsologtostderr -v 5: (32.870373269s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5: (1.051169508s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.014814871s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (391.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 stop --alsologtostderr -v 5
E1013 21:58:53.025741   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:00:49.950117   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:00:51.006195   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 stop --alsologtostderr -v 5: (4m26.586952342s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 start --wait true --alsologtostderr -v 5: (2m4.724560883s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (391.42s)

TestMultiControlPlane/serial/DeleteSecondaryNode (19.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 node delete m03 --alsologtostderr -v 5: (18.471572256s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.29s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (248.5s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 stop --alsologtostderr -v 5
E1013 22:05:49.944414   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:05:51.005836   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:07:14.071066   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 stop --alsologtostderr -v 5: (4m8.387641428s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5: exit status 7 (111.273302ms)
-- stdout --
	ha-592603
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-592603-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-592603-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 22:08:07.398355   42531 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:08:07.398607   42531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:07.398617   42531 out.go:374] Setting ErrFile to fd 2...
	I1013 22:08:07.398623   42531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:07.398825   42531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:08:07.399022   42531 out.go:368] Setting JSON to false
	I1013 22:08:07.399052   42531 mustload.go:65] Loading cluster: ha-592603
	I1013 22:08:07.399080   42531 notify.go:220] Checking for updates...
	I1013 22:08:07.399428   42531 config.go:182] Loaded profile config "ha-592603": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:07.399445   42531 status.go:174] checking status of ha-592603 ...
	I1013 22:08:07.399873   42531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:08:07.399935   42531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:08:07.425675   42531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I1013 22:08:07.426268   42531 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:08:07.426825   42531 main.go:141] libmachine: Using API Version  1
	I1013 22:08:07.426845   42531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:08:07.427252   42531 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:08:07.427522   42531 main.go:141] libmachine: (ha-592603) Calling .GetState
	I1013 22:08:07.429378   42531 status.go:371] ha-592603 host status = "Stopped" (err=<nil>)
	I1013 22:08:07.429398   42531 status.go:384] host is not running, skipping remaining checks
	I1013 22:08:07.429403   42531 status.go:176] ha-592603 status: &{Name:ha-592603 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:08:07.429417   42531 status.go:174] checking status of ha-592603-m02 ...
	I1013 22:08:07.429713   42531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:08:07.429748   42531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:08:07.442797   42531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I1013 22:08:07.443203   42531 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:08:07.443603   42531 main.go:141] libmachine: Using API Version  1
	I1013 22:08:07.443643   42531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:08:07.444106   42531 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:08:07.444387   42531 main.go:141] libmachine: (ha-592603-m02) Calling .GetState
	I1013 22:08:07.446222   42531 status.go:371] ha-592603-m02 host status = "Stopped" (err=<nil>)
	I1013 22:08:07.446241   42531 status.go:384] host is not running, skipping remaining checks
	I1013 22:08:07.446248   42531 status.go:176] ha-592603-m02 status: &{Name:ha-592603-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:08:07.446268   42531 status.go:174] checking status of ha-592603-m04 ...
	I1013 22:08:07.446550   42531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:08:07.446583   42531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:08:07.459404   42531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I1013 22:08:07.459812   42531 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:08:07.460360   42531 main.go:141] libmachine: Using API Version  1
	I1013 22:08:07.460381   42531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:08:07.460745   42531 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:08:07.460947   42531 main.go:141] libmachine: (ha-592603-m04) Calling .GetState
	I1013 22:08:07.462738   42531 status.go:371] ha-592603-m04 host status = "Stopped" (err=<nil>)
	I1013 22:08:07.462750   42531 status.go:384] host is not running, skipping remaining checks
	I1013 22:08:07.462755   42531 status.go:176] ha-592603-m04 status: &{Name:ha-592603-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (248.50s)

TestMultiControlPlane/serial/RestartCluster (99.49s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.697540315s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.49s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (83.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 node add --control-plane --alsologtostderr -v 5
E1013 22:10:49.942599   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:51.005400   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-592603 node add --control-plane --alsologtostderr -v 5: (1m22.443806551s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-592603 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (85.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-317679 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-317679 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.835303506s)
--- PASS: TestJSONOutput/start/Command (85.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-317679 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-317679 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-317679 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-317679 --output=json --user=testUser: (7.04407701s)
--- PASS: TestJSONOutput/stop/Command (7.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-664045 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-664045 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.801338ms)
-- stdout --
	{"specversion":"1.0","id":"5a847bd9-3695-46cd-8ab4-c13d04009457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664045] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b6d5391-12b9-4494-9264-b4413d87aa2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"0b3ff869-eb81-4b9a-a96d-7590d5295e7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"71390113-0ae6-4af0-80c4-4e72a9d5a276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig"}}
	{"specversion":"1.0","id":"c499c028-2039-4825-b551-fa9fe70259e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube"}}
	{"specversion":"1.0","id":"eaed95c6-bd00-4c33-81e2-d7f0e6b4664a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2d5fec40-4285-465b-a1c9-83ae47e87ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f778493d-6693-46d7-a366-610c9e6f563d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-664045
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (84.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-881615 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-881615 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.337945026s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-884247 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-884247 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.094775516s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-881615
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-884247
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-884247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-884247
helpers_test.go:175: Cleaning up "first-881615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-881615
--- PASS: TestMinikubeProfile (84.30s)

TestMountStart/serial/StartWithMountFirst (21.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-799090 --memory=3072 --mount-string /tmp/TestMountStartserial2270762751/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-799090 --memory=3072 --mount-string /tmp/TestMountStartserial2270762751/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.375243737s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.38s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-799090 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-799090 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (24.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-816349 --memory=3072 --mount-string /tmp/TestMountStartserial2270762751/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-816349 --memory=3072 --mount-string /tmp/TestMountStartserial2270762751/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.854639127s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.85s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816349 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816349 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-799090 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816349 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816349 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.35s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-816349
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-816349: (1.349763744s)
--- PASS: TestMountStart/serial/Stop (1.35s)

TestMountStart/serial/RestartStopped (20.57s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-816349
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-816349: (19.571011108s)
--- PASS: TestMountStart/serial/RestartStopped (20.57s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816349 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816349 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (98.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320444 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 22:15:33.027223   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:15:49.942594   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:15:51.005933   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320444 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.338432202s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.78s)

TestMultiNode/serial/DeployApp2Nodes (5.38s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-320444 -- rollout status deployment/busybox: (3.903860119s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-j5pm4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-s6ckx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-j5pm4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-s6ckx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-j5pm4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-s6ckx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.38s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-j5pm4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-j5pm4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-s6ckx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-320444 -- exec busybox-7b57f96db7-s6ckx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (43.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-320444 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-320444 -v=5 --alsologtostderr: (42.799885716s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.38s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-320444 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (7.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp testdata/cp-test.txt multinode-320444:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2580538722/001/cp-test_multinode-320444.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444:/home/docker/cp-test.txt multinode-320444-m02:/home/docker/cp-test_multinode-320444_multinode-320444-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m02 "sudo cat /home/docker/cp-test_multinode-320444_multinode-320444-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444:/home/docker/cp-test.txt multinode-320444-m03:/home/docker/cp-test_multinode-320444_multinode-320444-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m03 "sudo cat /home/docker/cp-test_multinode-320444_multinode-320444-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp testdata/cp-test.txt multinode-320444-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2580538722/001/cp-test_multinode-320444-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444-m02:/home/docker/cp-test.txt multinode-320444:/home/docker/cp-test_multinode-320444-m02_multinode-320444.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444 "sudo cat /home/docker/cp-test_multinode-320444-m02_multinode-320444.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444-m02:/home/docker/cp-test.txt multinode-320444-m03:/home/docker/cp-test_multinode-320444-m02_multinode-320444-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m03 "sudo cat /home/docker/cp-test_multinode-320444-m02_multinode-320444-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp testdata/cp-test.txt multinode-320444-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2580538722/001/cp-test_multinode-320444-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444-m03:/home/docker/cp-test.txt multinode-320444:/home/docker/cp-test_multinode-320444-m03_multinode-320444.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444 "sudo cat /home/docker/cp-test_multinode-320444-m03_multinode-320444.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 cp multinode-320444-m03:/home/docker/cp-test.txt multinode-320444-m02:/home/docker/cp-test_multinode-320444-m03_multinode-320444-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 ssh -n multinode-320444-m02 "sudo cat /home/docker/cp-test_multinode-320444-m03_multinode-320444-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.40s)

TestMultiNode/serial/StopNode (2.54s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-320444 node stop m03: (1.654691572s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320444 status: exit status 7 (443.9334ms)
-- stdout --
	multinode-320444
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320444-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-320444-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr: exit status 7 (435.850169ms)
-- stdout --
	multinode-320444
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320444-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-320444-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1013 22:18:04.406500   50640 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:18:04.406773   50640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:18:04.406784   50640 out.go:374] Setting ErrFile to fd 2...
	I1013 22:18:04.406790   50640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:18:04.406983   50640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:18:04.407189   50640 out.go:368] Setting JSON to false
	I1013 22:18:04.407220   50640 mustload.go:65] Loading cluster: multinode-320444
	I1013 22:18:04.407299   50640 notify.go:220] Checking for updates...
	I1013 22:18:04.407626   50640 config.go:182] Loaded profile config "multinode-320444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:18:04.407641   50640 status.go:174] checking status of multinode-320444 ...
	I1013 22:18:04.408185   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.408235   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.422282   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I1013 22:18:04.422746   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.423298   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.423317   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.423743   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.424059   50640 main.go:141] libmachine: (multinode-320444) Calling .GetState
	I1013 22:18:04.426097   50640 status.go:371] multinode-320444 host status = "Running" (err=<nil>)
	I1013 22:18:04.426112   50640 host.go:66] Checking if "multinode-320444" exists ...
	I1013 22:18:04.426451   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.426486   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.440106   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I1013 22:18:04.440605   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.441153   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.441229   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.441556   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.441758   50640 main.go:141] libmachine: (multinode-320444) Calling .GetIP
	I1013 22:18:04.445262   50640 main.go:141] libmachine: (multinode-320444) DBG | domain multinode-320444 has defined MAC address 52:54:00:42:ac:ac in network mk-multinode-320444
	I1013 22:18:04.445786   50640 main.go:141] libmachine: (multinode-320444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:ac:ac", ip: ""} in network mk-multinode-320444: {Iface:virbr1 ExpiryTime:2025-10-13 23:15:41 +0000 UTC Type:0 Mac:52:54:00:42:ac:ac Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:multinode-320444 Clientid:01:52:54:00:42:ac:ac}
	I1013 22:18:04.445811   50640 main.go:141] libmachine: (multinode-320444) DBG | domain multinode-320444 has defined IP address 192.168.39.162 and MAC address 52:54:00:42:ac:ac in network mk-multinode-320444
	I1013 22:18:04.446065   50640 host.go:66] Checking if "multinode-320444" exists ...
	I1013 22:18:04.446513   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.446557   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.460253   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I1013 22:18:04.460772   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.461257   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.461280   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.461581   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.461756   50640 main.go:141] libmachine: (multinode-320444) Calling .DriverName
	I1013 22:18:04.461958   50640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:18:04.461988   50640 main.go:141] libmachine: (multinode-320444) Calling .GetSSHHostname
	I1013 22:18:04.465221   50640 main.go:141] libmachine: (multinode-320444) DBG | domain multinode-320444 has defined MAC address 52:54:00:42:ac:ac in network mk-multinode-320444
	I1013 22:18:04.465695   50640 main.go:141] libmachine: (multinode-320444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:ac:ac", ip: ""} in network mk-multinode-320444: {Iface:virbr1 ExpiryTime:2025-10-13 23:15:41 +0000 UTC Type:0 Mac:52:54:00:42:ac:ac Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:multinode-320444 Clientid:01:52:54:00:42:ac:ac}
	I1013 22:18:04.465724   50640 main.go:141] libmachine: (multinode-320444) DBG | domain multinode-320444 has defined IP address 192.168.39.162 and MAC address 52:54:00:42:ac:ac in network mk-multinode-320444
	I1013 22:18:04.465883   50640 main.go:141] libmachine: (multinode-320444) Calling .GetSSHPort
	I1013 22:18:04.466030   50640 main.go:141] libmachine: (multinode-320444) Calling .GetSSHKeyPath
	I1013 22:18:04.466197   50640 main.go:141] libmachine: (multinode-320444) Calling .GetSSHUsername
	I1013 22:18:04.466353   50640 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/multinode-320444/id_rsa Username:docker}
	I1013 22:18:04.553501   50640 ssh_runner.go:195] Run: systemctl --version
	I1013 22:18:04.560126   50640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:18:04.577719   50640 kubeconfig.go:125] found "multinode-320444" server: "https://192.168.39.162:8443"
	I1013 22:18:04.577759   50640 api_server.go:166] Checking apiserver status ...
	I1013 22:18:04.577810   50640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:18:04.598686   50640 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W1013 22:18:04.611079   50640 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:18:04.611132   50640 ssh_runner.go:195] Run: ls
	I1013 22:18:04.616511   50640 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1013 22:18:04.622684   50640 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1013 22:18:04.622708   50640 status.go:463] multinode-320444 apiserver status = Running (err=<nil>)
	I1013 22:18:04.622717   50640 status.go:176] multinode-320444 status: &{Name:multinode-320444 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:18:04.622736   50640 status.go:174] checking status of multinode-320444-m02 ...
	I1013 22:18:04.623059   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.623091   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.636633   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I1013 22:18:04.637069   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.637549   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.637579   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.637953   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.638201   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .GetState
	I1013 22:18:04.640019   50640 status.go:371] multinode-320444-m02 host status = "Running" (err=<nil>)
	I1013 22:18:04.640037   50640 host.go:66] Checking if "multinode-320444-m02" exists ...
	I1013 22:18:04.640366   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.640403   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.653834   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I1013 22:18:04.654322   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.654753   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.654777   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.655081   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.655294   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .GetIP
	I1013 22:18:04.658586   50640 main.go:141] libmachine: (multinode-320444-m02) DBG | domain multinode-320444-m02 has defined MAC address 52:54:00:ce:af:a1 in network mk-multinode-320444
	I1013 22:18:04.659068   50640 main.go:141] libmachine: (multinode-320444-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:a1", ip: ""} in network mk-multinode-320444: {Iface:virbr1 ExpiryTime:2025-10-13 23:16:36 +0000 UTC Type:0 Mac:52:54:00:ce:af:a1 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:multinode-320444-m02 Clientid:01:52:54:00:ce:af:a1}
	I1013 22:18:04.659089   50640 main.go:141] libmachine: (multinode-320444-m02) DBG | domain multinode-320444-m02 has defined IP address 192.168.39.136 and MAC address 52:54:00:ce:af:a1 in network mk-multinode-320444
	I1013 22:18:04.659252   50640 host.go:66] Checking if "multinode-320444-m02" exists ...
	I1013 22:18:04.659531   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.659564   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.674347   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I1013 22:18:04.674723   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.675185   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.675213   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.675539   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.675729   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .DriverName
	I1013 22:18:04.675904   50640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:18:04.675935   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .GetSSHHostname
	I1013 22:18:04.679051   50640 main.go:141] libmachine: (multinode-320444-m02) DBG | domain multinode-320444-m02 has defined MAC address 52:54:00:ce:af:a1 in network mk-multinode-320444
	I1013 22:18:04.679530   50640 main.go:141] libmachine: (multinode-320444-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:af:a1", ip: ""} in network mk-multinode-320444: {Iface:virbr1 ExpiryTime:2025-10-13 23:16:36 +0000 UTC Type:0 Mac:52:54:00:ce:af:a1 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:multinode-320444-m02 Clientid:01:52:54:00:ce:af:a1}
	I1013 22:18:04.679562   50640 main.go:141] libmachine: (multinode-320444-m02) DBG | domain multinode-320444-m02 has defined IP address 192.168.39.136 and MAC address 52:54:00:ce:af:a1 in network mk-multinode-320444
	I1013 22:18:04.679691   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .GetSSHPort
	I1013 22:18:04.680014   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .GetSSHKeyPath
	I1013 22:18:04.680186   50640 main.go:141] libmachine: (multinode-320444-m02) Calling .GetSSHUsername
	I1013 22:18:04.680348   50640 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21724-15625/.minikube/machines/multinode-320444-m02/id_rsa Username:docker}
	I1013 22:18:04.760674   50640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:18:04.777208   50640 status.go:176] multinode-320444-m02 status: &{Name:multinode-320444-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:18:04.777248   50640 status.go:174] checking status of multinode-320444-m03 ...
	I1013 22:18:04.777570   50640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:18:04.777636   50640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:18:04.792117   50640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32901
	I1013 22:18:04.792621   50640 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:18:04.793030   50640 main.go:141] libmachine: Using API Version  1
	I1013 22:18:04.793053   50640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:18:04.793475   50640 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:18:04.793663   50640 main.go:141] libmachine: (multinode-320444-m03) Calling .GetState
	I1013 22:18:04.795896   50640 status.go:371] multinode-320444-m03 host status = "Stopped" (err=<nil>)
	I1013 22:18:04.795909   50640 status.go:384] host is not running, skipping remaining checks
	I1013 22:18:04.795914   50640 status.go:176] multinode-320444-m03 status: &{Name:multinode-320444-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.54s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-320444 node start m03 -v=5 --alsologtostderr: (37.130695203s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.76s)
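
A by-hand reproduction of this step is just the commands logged above, substituting plain minikube for the out/minikube-linux-amd64 test binary (profile and node names copied from the log):

    # restart the worker node that the previous StopNode step shut down
    minikube -p multinode-320444 node start m03 -v=5 --alsologtostderr
    # then confirm every node reports back in
    minikube -p multinode-320444 status -v=5 --alsologtostderr
    kubectl get nodes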

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (296.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-320444
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-320444
E1013 22:20:49.951458   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:20:51.005890   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-320444: (2m42.629673824s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320444 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320444 --wait=true -v=5 --alsologtostderr: (2m13.997957262s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-320444
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.72s)
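
The restart check boils down to recording the node list, stopping the whole profile, starting it again with --wait=true, and comparing the node list afterwards. A sketch of that cycle with the same flags as the run above:

    # capture the node list, bounce the entire cluster, and compare
    minikube node list -p multinode-320444
    minikube stop -p multinode-320444
    minikube start -p multinode-320444 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-320444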

                                                
                                    
TestMultiNode/serial/DeleteNode (2.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-320444 node delete m03: (2.340468037s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.90s)
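
The final readiness check in this step is a kubectl go-template that prints the Ready condition of every node; written out on its own (same template as above, only unwrapped for readability):

    # prints one "True"/"False" per node, taken from the Ready condition
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'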

                                                
                                    
TestMultiNode/serial/StopMultiNode (167.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 stop
E1013 22:23:54.074547   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:25:49.950537   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:25:51.006222   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-320444 stop: (2m47.506381688s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320444 status: exit status 7 (93.19029ms)

                                                
                                                
-- stdout --
	multinode-320444
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-320444-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr: exit status 7 (82.790886ms)

                                                
                                                
-- stdout --
	multinode-320444
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-320444-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:26:29.826763   53377 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:26:29.826991   53377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:26:29.827000   53377 out.go:374] Setting ErrFile to fd 2...
	I1013 22:26:29.827004   53377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:26:29.827214   53377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:26:29.827374   53377 out.go:368] Setting JSON to false
	I1013 22:26:29.827399   53377 mustload.go:65] Loading cluster: multinode-320444
	I1013 22:26:29.827447   53377 notify.go:220] Checking for updates...
	I1013 22:26:29.827755   53377 config.go:182] Loaded profile config "multinode-320444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:26:29.827768   53377 status.go:174] checking status of multinode-320444 ...
	I1013 22:26:29.828175   53377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:26:29.828211   53377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:26:29.845831   53377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45949
	I1013 22:26:29.846263   53377 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:26:29.846834   53377 main.go:141] libmachine: Using API Version  1
	I1013 22:26:29.846863   53377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:26:29.847232   53377 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:26:29.847426   53377 main.go:141] libmachine: (multinode-320444) Calling .GetState
	I1013 22:26:29.849116   53377 status.go:371] multinode-320444 host status = "Stopped" (err=<nil>)
	I1013 22:26:29.849129   53377 status.go:384] host is not running, skipping remaining checks
	I1013 22:26:29.849134   53377 status.go:176] multinode-320444 status: &{Name:multinode-320444 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 22:26:29.849152   53377 status.go:174] checking status of multinode-320444-m02 ...
	I1013 22:26:29.849463   53377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1013 22:26:29.849502   53377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1013 22:26:29.863038   53377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I1013 22:26:29.863441   53377 main.go:141] libmachine: () Calling .GetVersion
	I1013 22:26:29.863809   53377 main.go:141] libmachine: Using API Version  1
	I1013 22:26:29.863829   53377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1013 22:26:29.864195   53377 main.go:141] libmachine: () Calling .GetMachineName
	I1013 22:26:29.864371   53377 main.go:141] libmachine: (multinode-320444-m02) Calling .GetState
	I1013 22:26:29.866020   53377 status.go:371] multinode-320444-m02 host status = "Stopped" (err=<nil>)
	I1013 22:26:29.866041   53377 status.go:384] host is not running, skipping remaining checks
	I1013 22:26:29.866049   53377 status.go:176] multinode-320444-m02 status: &{Name:multinode-320444-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (167.68s)
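
The stop check is a full-profile stop followed by two status calls; minikube status exits non-zero once the hosts are down (exit status 7 in this run), so a scripted version has to tolerate that. A sketch, assuming the same profile:

    # stop every node in the profile
    minikube -p multinode-320444 stop
    # status exits non-zero for a stopped cluster, so don't let it abort a `set -e` script
    minikube -p multinode-320444 status || true
    minikube -p multinode-320444 status --alsologtostderr || true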

                                                
                                    
TestMultiNode/serial/RestartMultiNode (96.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320444 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320444 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.459166442s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-320444 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.03s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-320444
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320444-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-320444-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (61.962262ms)

                                                
                                                
-- stdout --
	* [multinode-320444-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-320444-m02' is duplicated with machine name 'multinode-320444-m02' in profile 'multinode-320444'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-320444-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-320444-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.527492166s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-320444
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-320444: exit status 80 (229.151058ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-320444 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-320444-m03 already exists in multinode-320444-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-320444-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.71s)
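
The conflict here comes from minikube's machine naming: worker machines of a multinode profile are named <profile>-m02, <profile>-m03, and so on, so a new profile that reuses one of those names is rejected outright (exit status 14), while an unrelated name is accepted but later collides with node add. A condensed reproduction of the commands above:

    # rejected: the profile name duplicates the existing machine multinode-320444-m02
    minikube start -p multinode-320444-m02 --driver=kvm2 --container-runtime=crio
    # accepted: -m03 is currently free, so this creates a separate single-node cluster
    minikube start -p multinode-320444-m03 --driver=kvm2 --container-runtime=crio
    # now node add fails with GUEST_NODE_ADD: the generated name multinode-320444-m03 is taken
    minikube node add -p multinode-320444
    # cleanup
    minikube delete -p multinode-320444-m03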

                                                
                                    
TestScheduledStopUnix (109.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-720174 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-720174 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.009508005s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-720174 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-720174 -n scheduled-stop-720174
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-720174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1013 22:32:02.620174   19947 retry.go:31] will retry after 116.933µs: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.621305   19947 retry.go:31] will retry after 102.777µs: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.622473   19947 retry.go:31] will retry after 309.187µs: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.623600   19947 retry.go:31] will retry after 443.946µs: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.624739   19947 retry.go:31] will retry after 297.382µs: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.625873   19947 retry.go:31] will retry after 803.632µs: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.627002   19947 retry.go:31] will retry after 1.245679ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.629224   19947 retry.go:31] will retry after 1.65396ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.631454   19947 retry.go:31] will retry after 2.26775ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.634661   19947 retry.go:31] will retry after 5.451704ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.640900   19947 retry.go:31] will retry after 7.920684ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.649460   19947 retry.go:31] will retry after 11.95655ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.661819   19947 retry.go:31] will retry after 13.661358ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.676104   19947 retry.go:31] will retry after 14.224418ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
I1013 22:32:02.692059   19947 retry.go:31] will retry after 19.011641ms: open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/scheduled-stop-720174/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-720174 --cancel-scheduled
E1013 22:32:13.030375   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-720174 -n scheduled-stop-720174
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-720174
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-720174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-720174
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-720174: exit status 7 (75.81649ms)

                                                
                                                
-- stdout --
	scheduled-stop-720174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-720174 -n scheduled-stop-720174
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-720174 -n scheduled-stop-720174: exit status 7 (62.705195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-720174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-720174
--- PASS: TestScheduledStopUnix (109.68s)
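
The scheduled-stop flow exercised above can be driven by hand with the same flags: schedule a stop well in the future, cancel it, then schedule a short one and let it fire. A sketch (profile name taken from the log; the sleep is only there to let the 15s timer expire):

    # schedule a stop five minutes out, then cancel it
    minikube stop -p scheduled-stop-720174 --schedule 5m
    minikube stop -p scheduled-stop-720174 --cancel-scheduled
    # schedule a 15-second stop and wait for it to fire
    minikube stop -p scheduled-stop-720174 --schedule 15s
    sleep 20
    # the host should now report Stopped (status exits 7 for a stopped cluster, as above)
    minikube status --format='{{.Host}}' -p scheduled-stop-720174 || true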

                                                
                                    
TestRunningBinaryUpgrade (109.36s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1177813120 start -p running-upgrade-410631 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1177813120 start -p running-upgrade-410631 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.368189999s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-410631 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 22:35:49.943434   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:35:51.005335   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-410631 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m1.622630276s)
helpers_test.go:175: Cleaning up "running-upgrade-410631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-410631
--- PASS: TestRunningBinaryUpgrade (109.36s)

                                                
                                    
TestKubernetesUpgrade (241.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.5851077s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-766348
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-766348: (2.345046642s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-766348 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-766348 status --format={{.Host}}: exit status 7 (74.927223ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.519878616s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-766348 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (88.825003ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-766348] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-766348
	    minikube start -p kubernetes-upgrade-766348 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7663482 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-766348 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.942650687s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-766348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-766348
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-766348: (1.161472772s)
--- PASS: TestKubernetesUpgrade (241.79s)
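
The upgrade path in this test is: create the cluster on the old Kubernetes version, stop it, start it again on the newer version, and then confirm that asking for the old version afterwards is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106). The same sequence by hand, with the versions from this run:

    # create on v1.28.0, stop, then upgrade the existing cluster in place
    minikube start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-766348
    minikube start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
    # downgrading an existing cluster is refused; delete and recreate instead (see the suggestion above)
    minikube start -p kubernetes-upgrade-766348 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio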

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-794544 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-794544 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (78.181226ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-794544] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
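
The failure is the intended behaviour: --no-kubernetes and --kubernetes-version contradict each other, and the error text points at the global config as the usual source of a stray version. Following the printed suggestion:

    # rejected with MK_USAGE (exit status 14): the two flags are mutually exclusive
    minikube start -p NoKubernetes-794544 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    # if the version comes from the global config rather than the command line, clear it first
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-794544 --no-kubernetes --driver=kvm2 --container-runtime=crio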

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (85.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-794544 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-794544 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.668914234s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-794544 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.02s)

                                                
                                    
TestNetworkPlugins/group/false (3.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-851286 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-851286 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (107.781601ms)

                                                
                                                
-- stdout --
	* [false-851286] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:33:16.852686   57694 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:33:16.852961   57694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:16.852971   57694 out.go:374] Setting ErrFile to fd 2...
	I1013 22:33:16.852978   57694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:33:16.853237   57694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-15625/.minikube/bin
	I1013 22:33:16.853734   57694 out.go:368] Setting JSON to false
	I1013 22:33:16.854626   57694 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8145,"bootTime":1760386652,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1013 22:33:16.854709   57694 start.go:141] virtualization: kvm guest
	I1013 22:33:16.856870   57694 out.go:179] * [false-851286] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1013 22:33:16.858190   57694 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:33:16.858179   57694 notify.go:220] Checking for updates...
	I1013 22:33:16.859459   57694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:33:16.860720   57694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-15625/kubeconfig
	I1013 22:33:16.862068   57694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-15625/.minikube
	I1013 22:33:16.863481   57694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1013 22:33:16.864916   57694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:33:16.866859   57694 config.go:182] Loaded profile config "NoKubernetes-794544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:16.867072   57694 config.go:182] Loaded profile config "force-systemd-env-815659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:16.867227   57694 config.go:182] Loaded profile config "offline-crio-787300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:33:16.867343   57694 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:33:16.902687   57694 out.go:179] * Using the kvm2 driver based on user configuration
	I1013 22:33:16.903930   57694 start.go:305] selected driver: kvm2
	I1013 22:33:16.903944   57694 start.go:925] validating driver "kvm2" against <nil>
	I1013 22:33:16.903954   57694 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:33:16.906055   57694 out.go:203] 
	W1013 22:33:16.907266   57694 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1013 22:33:16.908286   57694 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-851286 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-851286" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-851286

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-851286"

                                                
                                                
----------------------- debugLogs end: false-851286 [took: 2.803267309s] --------------------------------
helpers_test.go:175: Cleaning up "false-851286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-851286
--- PASS: TestNetworkPlugins/group/false (3.06s)
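
This subtest only exercises argument validation: with the crio runtime, disabling CNI is refused before any VM is created (MK_USAGE, exit status 14), which is why the debugLogs dump above just reports a missing profile for every probe. The one-line reproduction:

    # rejected up front: the "crio" container runtime requires CNI
    minikube start -p false-851286 --memory=3072 --cni=false --driver=kvm2 --container-runtime=crio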

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (161.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1407463323 start -p stopped-upgrade-694787 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1407463323 start -p stopped-upgrade-694787 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.158214566s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1407463323 -p stopped-upgrade-694787 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1407463323 -p stopped-upgrade-694787 stop: (1.792048116s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-694787 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-694787 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.88147999s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (161.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (31.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-794544 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-794544 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (29.943054012s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-794544 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-794544 status -o json: exit status 2 (250.380259ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-794544","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-794544
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.09s)

                                                
                                    
TestNoKubernetes/serial/Start (55.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-794544 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-794544 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.883294732s)
--- PASS: TestNoKubernetes/serial/Start (55.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-794544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-794544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.370633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (9.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (5.550364255s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.84092179s)
--- PASS: TestNoKubernetes/serial/ProfileList (9.39s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-794544
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-794544: (1.439292599s)
--- PASS: TestNoKubernetes/serial/Stop (1.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (37.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-794544 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-794544 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.79613176s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (37.80s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-694787
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-694787: (1.231828956s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestPause/serial/Start (101.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-056726 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-056726 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.88361999s)
--- PASS: TestPause/serial/Start (101.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-794544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-794544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.09581ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (91.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m31.805810426s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.81s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (79.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.994687901s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.99s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.266483233s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.27s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-851286 "pgrep -a kubelet"
I1013 22:39:51.923942   19947 config.go:182] Loaded profile config "auto-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-851286 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k82f9" [96545551-d877-467f-897f-ba3792ac66f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k82f9" [96545551-d877-467f-897f-ba3792ac66f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004377746s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fm9kc" [e4b44195-4c42-4f5a-af02-bebcdfd4f0c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004365495s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-851286 "pgrep -a kubelet"
I1013 22:40:09.051659   19947 config.go:182] Loaded profile config "flannel-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-851286 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j9k7p" [00a5b177-f322-4cea-b15c-4353a1f8fac3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j9k7p" [00a5b177-f322-4cea-b15c-4353a1f8fac3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.009356595s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (89.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.806369898s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.81s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1013 22:40:49.943087   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:40:51.005750   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m14.914594121s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.91s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-851286 "pgrep -a kubelet"
I1013 22:40:59.978417   19947 config.go:182] Loaded profile config "enable-default-cni-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-851286 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-grt24" [5b417ba0-6439-4965-95fd-6ba9fe9e4591] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-grt24" [5b417ba0-6439-4965-95fd-6ba9fe9e4591] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004819293s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.101136395s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (92.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-851286 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.764584707s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.76s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-851286 "pgrep -a kubelet"
I1013 22:41:49.569863   19947 config.go:182] Loaded profile config "bridge-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-851286 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5hqcb" [2af4138f-b3e8-4068-a415-ba252f4b5bd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5hqcb" [2af4138f-b3e8-4068-a415-ba252f4b5bd5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004715112s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-chcgm" [283ef959-d17d-4287-8d86-d495b49ad20c] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-chcgm" [283ef959-d17d-4287-8d86-d495b49ad20c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.109202093s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-851286 "pgrep -a kubelet"
I1013 22:41:57.826878   19947 config.go:182] Loaded profile config "calico-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-851286 replace --force -f testdata/netcat-deployment.yaml
I1013 22:41:58.695850   19947 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4h4cr" [6bfba03e-64c9-4a7f-9fee-e30eb679ed48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4h4cr" [6bfba03e-64c9-4a7f-9fee-e30eb679ed48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.009615484s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.90s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (97.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-820178 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-820178 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m37.43590775s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (97.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (124.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-915944 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-915944 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (2m4.077486484s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (124.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-csfvn" [58799906-45fd-438b-bdf8-ca5316181488] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00549713s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-851286 "pgrep -a kubelet"
I1013 22:42:40.552046   19947 config.go:182] Loaded profile config "kindnet-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-851286 replace --force -f testdata/netcat-deployment.yaml
I1013 22:42:40.808935   19947 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hmtgv" [e699b186-4484-4e82-a912-b07f64c0cb4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hmtgv" [e699b186-4484-4e82-a912-b07f64c0cb4d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.093907158s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-851286 "pgrep -a kubelet"
I1013 22:43:05.265572   19947 config.go:182] Loaded profile config "custom-flannel-851286": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-851286 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2x48m" [00be5a8d-534e-4908-8a46-0d79a3b7b869] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2x48m" [00be5a8d-534e-4908-8a46-0d79a3b7b869] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004823161s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (94.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-997718 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-997718 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m34.132987562s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-851286 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-851286 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
E1013 22:47:12.997117   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-049346 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-049346 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m0.272746172s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-820178 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b44393f1-baf1-476a-91ef-4718f833ccaa] Pending
helpers_test.go:352: "busybox" [b44393f1-baf1-476a-91ef-4718f833ccaa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b44393f1-baf1-476a-91ef-4718f833ccaa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003812291s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-820178 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-820178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-820178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.888337019s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-820178 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (73.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-820178 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-820178 --alsologtostderr -v=3: (1m13.612515282s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (73.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-915944 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [374aa94e-260b-4d9d-ba57-218af3d69c33] Pending
helpers_test.go:352: "busybox" [374aa94e-260b-4d9d-ba57-218af3d69c33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [374aa94e-260b-4d9d-ba57-218af3d69c33] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.002722546s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-915944 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-049346 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [83bc6ae7-c3a8-4e69-9146-8eb7a679e610] Pending
helpers_test.go:352: "busybox" [83bc6ae7-c3a8-4e69-9146-8eb7a679e610] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [83bc6ae7-c3a8-4e69-9146-8eb7a679e610] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005035884s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-049346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-997718 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f2d54f75-5fc1-404a-91e6-512645b2e1f6] Pending
helpers_test.go:352: "busybox" [f2d54f75-5fc1-404a-91e6-512645b2e1f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f2d54f75-5fc1-404a-91e6-512645b2e1f6] Running
E1013 22:44:52.147686   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:52.154138   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:52.165530   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:52.186907   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:52.228332   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:52.309788   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00448729s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-997718 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-915944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-915944 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (78.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-915944 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-915944 --alsologtostderr -v=3: (1m18.207037548s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (78.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-049346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-049346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (75.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-049346 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-049346 --alsologtostderr -v=3: (1m15.168785233s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (75.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-997718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1013 22:44:52.471615   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:52.793692   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-997718 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (89.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-997718 --alsologtostderr -v=3
E1013 22:44:53.435972   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:54.718206   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:44:57.279861   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.402126   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.808905   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.815375   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.826775   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.848198   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.889665   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:02.971176   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:03.132773   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:03.454419   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:04.096532   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:05.378704   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:07.940861   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:12.644280   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:13.062620   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:23.303970   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-997718 --alsologtostderr -v=3: (1m29.365319264s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (89.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820178 -n old-k8s-version-820178
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820178 -n old-k8s-version-820178: exit status 7 (72.46203ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-820178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-820178 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1013 22:45:33.125903   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:43.785337   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:49.942959   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/addons-323324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:45:51.005386   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/functional-613120/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-820178 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (43.605442154s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820178 -n old-k8s-version-820178
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346
E1013 22:46:00.260055   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:00.266451   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:00.277900   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:00.299377   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346: exit status 7 (76.611147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-049346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1013 22:46:00.340727   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:00.422541   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-049346 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 22:46:00.583866   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:00.905321   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:01.547386   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-049346 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (47.455075625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-915944 -n no-preload-915944
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-915944 -n no-preload-915944: exit status 7 (70.809815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-915944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (78.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-915944 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 22:46:02.828711   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:05.390068   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-915944 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m18.503319595s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-915944 -n no-preload-915944
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (78.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qvzpm" [3106c851-7964-42e4-ae75-3c5e8ded9e2f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 22:46:10.511619   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:14.088013   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qvzpm" [3106c851-7964-42e4-ae75-3c5e8ded9e2f] Running
E1013 22:46:20.753988   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.006173973s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qvzpm" [3106c851-7964-42e4-ae75-3c5e8ded9e2f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005118827s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-820178 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997718 -n embed-certs-997718
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997718 -n embed-certs-997718: exit status 7 (91.991001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-997718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (60.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-997718 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 22:46:24.746886   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-997718 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (59.872850851s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-997718 -n embed-certs-997718
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (60.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-820178 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.50s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-820178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-820178 --alsologtostderr -v=1: (1.106668489s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820178 -n old-k8s-version-820178
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820178 -n old-k8s-version-820178: exit status 2 (319.189419ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-820178 -n old-k8s-version-820178
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-820178 -n old-k8s-version-820178: exit status 2 (319.762939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-820178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820178 -n old-k8s-version-820178
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-820178 -n old-k8s-version-820178
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (75.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-221451 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 22:46:41.235774   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-221451 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m15.111829903s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9v5r" [a44a8c22-4596-4337-a352-44b6d3e2659e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 22:46:49.833107   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:49.839481   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:49.850911   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:49.872355   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:49.914270   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:49.995951   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:50.157608   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:50.479024   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:51.120914   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.402831   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.500212   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.506832   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.518319   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.540353   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.581879   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.663207   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:52.825372   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:53.147244   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:53.789129   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9v5r" [a44a8c22-4596-4337-a352-44b6d3e2659e] Running
E1013 22:46:54.964895   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:55.071211   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:46:57.632976   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:00.086767   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004359809s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9v5r" [a44a8c22-4596-4337-a352-44b6d3e2659e] Running
E1013 22:47:02.755215   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005368907s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-049346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-049346 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-049346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-049346 --alsologtostderr -v=1: (1.218040277s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346: exit status 2 (356.158717ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346: exit status 2 (358.089789ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-049346 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-049346 --alsologtostderr -v=1: (1.081930985s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-049346 -n default-k8s-diff-port-049346
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zq7lv" [111d04b7-777d-4cc0-b620-7a6dc8189a5a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1013 22:47:22.197442   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/enable-default-cni-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zq7lv" [111d04b7-777d-4cc0-b620-7a6dc8189a5a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004392797s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-25pk4" [20e14e6f-5f01-4050-beac-33417c68efc4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-25pk4" [20e14e6f-5f01-4050-beac-33417c68efc4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005717601s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zq7lv" [111d04b7-777d-4cc0-b620-7a6dc8189a5a] Running
E1013 22:47:30.810688   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005981135s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-915944 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-25pk4" [20e14e6f-5f01-4050-beac-33417c68efc4] Running
E1013 22:47:33.479089   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.321685   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.328061   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.339410   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.360811   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.402249   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.483715   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.645356   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:47:34.967276   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005373962s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-997718 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-915944 image list --format=json
E1013 22:47:35.609585   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-915944 --alsologtostderr -v=1
E1013 22:47:36.009950   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/auto-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-915944 -n no-preload-915944
E1013 22:47:36.891830   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-915944 -n no-preload-915944: exit status 2 (281.230203ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-915944 -n no-preload-915944
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-915944 -n no-preload-915944: exit status 2 (284.265963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-915944 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-915944 -n no-preload-915944
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-915944 -n no-preload-915944
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-997718 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-997718 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-997718 --alsologtostderr -v=1: (1.064555791s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997718 -n embed-certs-997718
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997718 -n embed-certs-997718: exit status 2 (288.579057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-997718 -n embed-certs-997718
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-997718 -n embed-certs-997718: exit status 2 (288.16566ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-997718 --alsologtostderr -v=1
E1013 22:47:39.453752   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-997718 -n embed-certs-997718
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-997718 -n embed-certs-997718
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-221451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-221451 --alsologtostderr -v=3
E1013 22:47:54.816881   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-221451 --alsologtostderr -v=3: (7.445949386s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-221451 -n newest-cni-221451
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-221451 -n newest-cni-221451: exit status 7 (64.313926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-221451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-221451 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1013 22:48:05.515186   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:05.521701   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:05.533150   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:05.554606   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:05.596100   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:05.677686   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:05.839329   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:06.161108   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:06.803208   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:08.085350   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:10.646830   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:11.772335   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/bridge-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:14.441385   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/calico-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:15.298427   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/kindnet-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:15.768961   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:48:26.011274   19947 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-15625/.minikube/profiles/custom-flannel-851286/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-221451 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (36.140063805s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-221451 -n newest-cni-221451
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-221451 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-221451 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-221451 --alsologtostderr -v=1: (1.053677063s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221451 -n newest-cni-221451
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221451 -n newest-cni-221451: exit status 2 (288.637948ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221451 -n newest-cni-221451
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221451 -n newest-cni-221451: exit status 2 (299.595578ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-221451 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-221451 -n newest-cni-221451
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-221451 -n newest-cni-221451
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.32s)

                                                
                                    

Test skip (40/324)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 2.87
267 TestNetworkPlugins/group/cilium 3.21
282 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323324 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-851286 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-851286" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-851286

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-851286"

                                                
                                                
----------------------- debugLogs end: kubenet-851286 [took: 2.735243179s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-851286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-851286
--- SKIP: TestNetworkPlugins/group/kubenet (2.87s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-851286 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-851286" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
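
The kubeconfig dump above is empty (no clusters, contexts, or users), which is consistent with every "context \"cilium-851286\" does not exist" error in this section: the profile was never started, so no context was ever written. Below is a minimal sketch of checking for that context programmatically, assuming the k8s.io/client-go clientcmd package and the default kubeconfig search path; it is illustrative only and not part of the minikube test suite.

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the merged kubeconfig from $KUBECONFIG / ~/.kube/config.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("failed to load kubeconfig:", err)
			return
		}
		// Against the empty config dumped above, this lookup fails, matching
		// the "context does not exist" errors reported by kubectl.
		if _, ok := cfg.Contexts["cilium-851286"]; !ok {
			fmt.Println(`context "cilium-851286" does not exist`)
		}
	}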

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-851286

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-851286" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-851286"

                                                
                                                
----------------------- debugLogs end: cilium-851286 [took: 3.054084831s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-851286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-851286
--- SKIP: TestNetworkPlugins/group/cilium (3.21s)
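
Every ">>> host: ..." probe above fails with the same "Profile not found" hint and every ">>> k8s: ..." probe with "context does not exist" because the cilium profile was never created: the network-plugin subtest is skipped on this configuration, yet the post-mortem collector still runs all of its probes. The following is a rough sketch of such a collector loop, using hypothetical helper and probe names and plain exec.Command invocations; the real collector lives in the minikube test helpers and is not reproduced here.

	package main

	import (
		"fmt"
		"os/exec"
	)

	type probe struct {
		title string
		args  []string
	}

	// dumpDebugLogs runs a set of diagnostic commands against a profile and
	// prints each section with a ">>> title:" header, as in the log above.
	func dumpDebugLogs(profile string) {
		// A small subset of the probes seen above; the full list also covers
		// host daemons, CNI config, and Kubernetes objects.
		probes := []probe{
			{"host: crio config", []string{"minikube", "-p", profile, "ssh", "sudo crio config"}},
			{"k8s: kube-proxy logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
		}
		for _, p := range probes {
			fmt.Printf(">>> %s:\n", p.title)
			out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
			fmt.Print(string(out))
			if err != nil {
				// With no profile started, each probe fails fast, producing the
				// repeated "Profile not found" / "context does not exist" lines.
				fmt.Println(err)
			}
		}
	}

	func main() {
		dumpDebugLogs("cilium-851286")
	}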

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-412577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-412577
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
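
The skip at start_stop_delete_test.go:101 gates this subtest on the VirtualBox driver, so the KVM run here never exercises it and only cleans up the pre-created profile. A hypothetical sketch of such a driver gate follows, assuming the driver name is passed in by the caller; only the skip message mirrors the log above, the rest is not the actual test code.

	package integration

	import "testing"

	// skipUnlessVirtualBox skips a subtest on any driver other than virtualbox,
	// mirroring the behaviour reported above. Name and signature are assumptions.
	func skipUnlessVirtualBox(t *testing.T, driver string) {
		if driver != "virtualbox" {
			t.Skipf("skipping %s - only runs on virtualbox", t.Name())
		}
	}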

                                                
                                    