Test Report: KVM_Linux_crio 21664

fca5789b7681da792c5737c174f2f0168409bc21:2025-10-17:41948

Failed tests (2/330)

Order  Failed test                  Duration (s)
37     TestAddons/parallel/Ingress  158.63
244    TestPreload                  131.99
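To iterate on these failures outside CI, both tests can be re-run in isolation. A minimal sketch, assuming a minikube source checkout and the same kvm2/crio configuration this job used; the -minikube-start-args flag name follows minikube's integration-test harness and is an assumption to verify against test/integration:

    # Sketch: re-run only the two failed tests (flag names are assumptions)
    go test ./test/integration -v -timeout 120m \
      -run "TestAddons/parallel/Ingress|TestPreload" \
      -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"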
TestAddons/parallel/Ingress (158.63s)
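Triage note: the failing step is the in-VM curl at addons_test.go:264. ssh propagates the remote command's exit status, and curl's exit code 28 is CURLE_OPERATION_TIMEDOUT, so the request through the ingress timed out rather than being refused. Below is a manual reproduction sketch against this run's profile (addons-322722); the --max-time bound and the controller check are triage additions, not part of the test:

    # Probe the ingress path the test exercises, with an explicit timeout
    out/minikube-linux-amd64 -p addons-322722 ssh \
      "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

    # Confirm the ingress-nginx controller is Ready and its service is present
    kubectl --context addons-322722 -n ingress-nginx get pods,svc \
      -l app.kubernetes.io/component=controller

The raw test log follows.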

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-322722 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-322722 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-322722 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9731b390-9ba2-425a-887f-65322578dfef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9731b390-9ba2-425a-887f-65322578dfef] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.011214549s
I1017 19:26:32.712383  113592 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-322722 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.123762439s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-322722 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.86
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-322722 -n addons-322722
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 logs -n 25: (1.355308178s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-651643                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-651643 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ start   │ --download-only -p binary-mirror-717550 --alsologtostderr --binary-mirror http://127.0.0.1:46365 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717550 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │                     │
	│ delete  │ -p binary-mirror-717550                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-717550 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ addons  │ disable dashboard -p addons-322722                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │                     │
	│ addons  │ enable dashboard -p addons-322722                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │                     │
	│ start   │ -p addons-322722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:25 UTC │
	│ addons  │ addons-322722 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:25 UTC │
	│ addons  │ addons-322722 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:25 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ enable headlamp -p addons-322722 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ ip      │ addons-322722 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ ssh     │ addons-322722 ssh cat /opt/local-path-provisioner/pvc-693455d1-f7f2-4ada-abe5-ab11ca9f9218_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:27 UTC │
	│ addons  │ addons-322722 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ ssh     │ addons-322722 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-322722                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:26 UTC │
	│ addons  │ addons-322722 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:26 UTC │ 17 Oct 25 19:27 UTC │
	│ ip      │ addons-322722 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-322722        │ jenkins │ v1.37.0 │ 17 Oct 25 19:28 UTC │ 17 Oct 25 19:28 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:22:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:22:20.931423  114312 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:22:20.931708  114312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:22:20.931720  114312 out.go:374] Setting ErrFile to fd 2...
	I1017 19:22:20.931724  114312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:22:20.931937  114312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 19:22:20.932472  114312 out.go:368] Setting JSON to false
	I1017 19:22:20.933455  114312 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3882,"bootTime":1760725059,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:22:20.933570  114312 start.go:141] virtualization: kvm guest
	I1017 19:22:20.935272  114312 out.go:179] * [addons-322722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:22:20.936467  114312 notify.go:220] Checking for updates...
	I1017 19:22:20.936486  114312 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:22:20.937783  114312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:22:20.939154  114312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 19:22:20.944145  114312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:22:20.945410  114312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:22:20.946574  114312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:22:20.947747  114312 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:22:20.980648  114312 out.go:179] * Using the kvm2 driver based on user configuration
	I1017 19:22:20.981836  114312 start.go:305] selected driver: kvm2
	I1017 19:22:20.981862  114312 start.go:925] validating driver "kvm2" against <nil>
	I1017 19:22:20.981877  114312 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:22:20.982552  114312 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:22:20.982622  114312 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:22:20.997321  114312 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:22:20.997353  114312 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:22:21.012225  114312 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:22:21.012276  114312 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:22:21.012496  114312 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:22:21.012521  114312 cni.go:84] Creating CNI manager for ""
	I1017 19:22:21.012547  114312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:22:21.012553  114312 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 19:22:21.012598  114312 start.go:349] cluster config:
	{Name:addons-322722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-322722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:22:21.012698  114312 iso.go:125] acquiring lock: {Name:mk2487fdd858c1cb489b6312535f031f58d5b643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:22:21.015071  114312 out.go:179] * Starting "addons-322722" primary control-plane node in "addons-322722" cluster
	I1017 19:22:21.016149  114312 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:22:21.016181  114312 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:22:21.016191  114312 cache.go:58] Caching tarball of preloaded images
	I1017 19:22:21.016256  114312 preload.go:233] Found /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1017 19:22:21.016266  114312 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:22:21.016518  114312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/config.json ...
	I1017 19:22:21.016538  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/config.json: {Name:mke85047ea76d6504f86e9f2bf03d88bb17ba70c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:21.016671  114312 start.go:360] acquireMachinesLock for addons-322722: {Name:mkcde7cc25d2fb2130f0f72f7c9bd6675341a268 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 19:22:21.016717  114312 start.go:364] duration metric: took 33.31µs to acquireMachinesLock for "addons-322722"
	I1017 19:22:21.016735  114312 start.go:93] Provisioning new machine with config: &{Name:addons-322722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-322722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:22:21.016781  114312 start.go:125] createHost starting for "" (driver="kvm2")
	I1017 19:22:21.018365  114312 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1017 19:22:21.018480  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:22:21.018508  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:22:21.031248  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I1017 19:22:21.031826  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:22:21.032406  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:22:21.032428  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:22:21.032823  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:22:21.033016  114312 main.go:141] libmachine: (addons-322722) Calling .GetMachineName
	I1017 19:22:21.033161  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:21.033327  114312 start.go:159] libmachine.API.Create for "addons-322722" (driver="kvm2")
	I1017 19:22:21.033366  114312 client.go:168] LocalClient.Create starting
	I1017 19:22:21.033411  114312 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem
	I1017 19:22:21.353235  114312 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem
	I1017 19:22:21.412564  114312 main.go:141] libmachine: Running pre-create checks...
	I1017 19:22:21.412593  114312 main.go:141] libmachine: (addons-322722) Calling .PreCreateCheck
	I1017 19:22:21.413111  114312 main.go:141] libmachine: (addons-322722) Calling .GetConfigRaw
	I1017 19:22:21.413605  114312 main.go:141] libmachine: Creating machine...
	I1017 19:22:21.413646  114312 main.go:141] libmachine: (addons-322722) Calling .Create
	I1017 19:22:21.413870  114312 main.go:141] libmachine: (addons-322722) creating domain...
	I1017 19:22:21.413904  114312 main.go:141] libmachine: (addons-322722) creating network...
	I1017 19:22:21.415312  114312 main.go:141] libmachine: (addons-322722) DBG | found existing default network
	I1017 19:22:21.415588  114312 main.go:141] libmachine: (addons-322722) DBG | <network>
	I1017 19:22:21.415610  114312 main.go:141] libmachine: (addons-322722) DBG |   <name>default</name>
	I1017 19:22:21.415622  114312 main.go:141] libmachine: (addons-322722) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1017 19:22:21.415642  114312 main.go:141] libmachine: (addons-322722) DBG |   <forward mode='nat'>
	I1017 19:22:21.415668  114312 main.go:141] libmachine: (addons-322722) DBG |     <nat>
	I1017 19:22:21.415699  114312 main.go:141] libmachine: (addons-322722) DBG |       <port start='1024' end='65535'/>
	I1017 19:22:21.415712  114312 main.go:141] libmachine: (addons-322722) DBG |     </nat>
	I1017 19:22:21.415721  114312 main.go:141] libmachine: (addons-322722) DBG |   </forward>
	I1017 19:22:21.415729  114312 main.go:141] libmachine: (addons-322722) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1017 19:22:21.415739  114312 main.go:141] libmachine: (addons-322722) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1017 19:22:21.415779  114312 main.go:141] libmachine: (addons-322722) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1017 19:22:21.415805  114312 main.go:141] libmachine: (addons-322722) DBG |     <dhcp>
	I1017 19:22:21.415817  114312 main.go:141] libmachine: (addons-322722) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1017 19:22:21.415830  114312 main.go:141] libmachine: (addons-322722) DBG |     </dhcp>
	I1017 19:22:21.415838  114312 main.go:141] libmachine: (addons-322722) DBG |   </ip>
	I1017 19:22:21.415859  114312 main.go:141] libmachine: (addons-322722) DBG | </network>
	I1017 19:22:21.415889  114312 main.go:141] libmachine: (addons-322722) DBG | 
	I1017 19:22:21.416413  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:21.416257  114341 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000012870}
	I1017 19:22:21.416486  114312 main.go:141] libmachine: (addons-322722) DBG | defining private network:
	I1017 19:22:21.416508  114312 main.go:141] libmachine: (addons-322722) DBG | 
	I1017 19:22:21.416530  114312 main.go:141] libmachine: (addons-322722) DBG | <network>
	I1017 19:22:21.416546  114312 main.go:141] libmachine: (addons-322722) DBG |   <name>mk-addons-322722</name>
	I1017 19:22:21.416555  114312 main.go:141] libmachine: (addons-322722) DBG |   <dns enable='no'/>
	I1017 19:22:21.416567  114312 main.go:141] libmachine: (addons-322722) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1017 19:22:21.416579  114312 main.go:141] libmachine: (addons-322722) DBG |     <dhcp>
	I1017 19:22:21.416591  114312 main.go:141] libmachine: (addons-322722) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1017 19:22:21.416599  114312 main.go:141] libmachine: (addons-322722) DBG |     </dhcp>
	I1017 19:22:21.416603  114312 main.go:141] libmachine: (addons-322722) DBG |   </ip>
	I1017 19:22:21.416609  114312 main.go:141] libmachine: (addons-322722) DBG | </network>
	I1017 19:22:21.416615  114312 main.go:141] libmachine: (addons-322722) DBG | 
	I1017 19:22:21.422622  114312 main.go:141] libmachine: (addons-322722) DBG | creating private network mk-addons-322722 192.168.39.0/24...
	I1017 19:22:21.491534  114312 main.go:141] libmachine: (addons-322722) DBG | private network mk-addons-322722 192.168.39.0/24 created
	I1017 19:22:21.491900  114312 main.go:141] libmachine: (addons-322722) DBG | <network>
	I1017 19:22:21.491921  114312 main.go:141] libmachine: (addons-322722) DBG |   <name>mk-addons-322722</name>
	I1017 19:22:21.491934  114312 main.go:141] libmachine: (addons-322722) DBG |   <uuid>5ab24578-0208-489b-a9a7-2989acaf112c</uuid>
	I1017 19:22:21.491942  114312 main.go:141] libmachine: (addons-322722) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1017 19:22:21.491952  114312 main.go:141] libmachine: (addons-322722) DBG |   <mac address='52:54:00:3d:3f:99'/>
	I1017 19:22:21.491960  114312 main.go:141] libmachine: (addons-322722) DBG |   <dns enable='no'/>
	I1017 19:22:21.491975  114312 main.go:141] libmachine: (addons-322722) setting up store path in /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722 ...
	I1017 19:22:21.492001  114312 main.go:141] libmachine: (addons-322722) building disk image from file:///home/jenkins/minikube-integration/21664-109682/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1017 19:22:21.492017  114312 main.go:141] libmachine: (addons-322722) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1017 19:22:21.492025  114312 main.go:141] libmachine: (addons-322722) DBG |     <dhcp>
	I1017 19:22:21.492051  114312 main.go:141] libmachine: (addons-322722) Downloading /home/jenkins/minikube-integration/21664-109682/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21664-109682/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1017 19:22:21.492079  114312 main.go:141] libmachine: (addons-322722) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1017 19:22:21.492092  114312 main.go:141] libmachine: (addons-322722) DBG |     </dhcp>
	I1017 19:22:21.492102  114312 main.go:141] libmachine: (addons-322722) DBG |   </ip>
	I1017 19:22:21.492113  114312 main.go:141] libmachine: (addons-322722) DBG | </network>
	I1017 19:22:21.492121  114312 main.go:141] libmachine: (addons-322722) DBG | 
	I1017 19:22:21.492138  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:21.491899  114341 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:22:21.776327  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:21.776144  114341 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa...
	I1017 19:22:22.254716  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:22.254546  114341 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/addons-322722.rawdisk...
	I1017 19:22:22.254741  114312 main.go:141] libmachine: (addons-322722) DBG | Writing magic tar header
	I1017 19:22:22.254758  114312 main.go:141] libmachine: (addons-322722) DBG | Writing SSH key tar header
	I1017 19:22:22.254768  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:22.254711  114341 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722 ...
	I1017 19:22:22.254895  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722
	I1017 19:22:22.254926  114312 main.go:141] libmachine: (addons-322722) setting executable bit set on /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722 (perms=drwx------)
	I1017 19:22:22.254936  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21664-109682/.minikube/machines
	I1017 19:22:22.254950  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:22:22.254963  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21664-109682
	I1017 19:22:22.254979  114312 main.go:141] libmachine: (addons-322722) setting executable bit set on /home/jenkins/minikube-integration/21664-109682/.minikube/machines (perms=drwxr-xr-x)
	I1017 19:22:22.254989  114312 main.go:141] libmachine: (addons-322722) setting executable bit set on /home/jenkins/minikube-integration/21664-109682/.minikube (perms=drwxr-xr-x)
	I1017 19:22:22.254996  114312 main.go:141] libmachine: (addons-322722) setting executable bit set on /home/jenkins/minikube-integration/21664-109682 (perms=drwxrwxr-x)
	I1017 19:22:22.255006  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1017 19:22:22.255014  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home/jenkins
	I1017 19:22:22.255023  114312 main.go:141] libmachine: (addons-322722) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1017 19:22:22.255034  114312 main.go:141] libmachine: (addons-322722) DBG | checking permissions on dir: /home
	I1017 19:22:22.255042  114312 main.go:141] libmachine: (addons-322722) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1017 19:22:22.255055  114312 main.go:141] libmachine: (addons-322722) defining domain...
	I1017 19:22:22.255070  114312 main.go:141] libmachine: (addons-322722) DBG | skipping /home - not owner
	I1017 19:22:22.256520  114312 main.go:141] libmachine: (addons-322722) defining domain using XML: 
	I1017 19:22:22.256547  114312 main.go:141] libmachine: (addons-322722) <domain type='kvm'>
	I1017 19:22:22.256554  114312 main.go:141] libmachine: (addons-322722)   <name>addons-322722</name>
	I1017 19:22:22.256560  114312 main.go:141] libmachine: (addons-322722)   <memory unit='MiB'>4096</memory>
	I1017 19:22:22.256570  114312 main.go:141] libmachine: (addons-322722)   <vcpu>2</vcpu>
	I1017 19:22:22.256574  114312 main.go:141] libmachine: (addons-322722)   <features>
	I1017 19:22:22.256579  114312 main.go:141] libmachine: (addons-322722)     <acpi/>
	I1017 19:22:22.256583  114312 main.go:141] libmachine: (addons-322722)     <apic/>
	I1017 19:22:22.256588  114312 main.go:141] libmachine: (addons-322722)     <pae/>
	I1017 19:22:22.256591  114312 main.go:141] libmachine: (addons-322722)   </features>
	I1017 19:22:22.256596  114312 main.go:141] libmachine: (addons-322722)   <cpu mode='host-passthrough'>
	I1017 19:22:22.256603  114312 main.go:141] libmachine: (addons-322722)   </cpu>
	I1017 19:22:22.256608  114312 main.go:141] libmachine: (addons-322722)   <os>
	I1017 19:22:22.256612  114312 main.go:141] libmachine: (addons-322722)     <type>hvm</type>
	I1017 19:22:22.256617  114312 main.go:141] libmachine: (addons-322722)     <boot dev='cdrom'/>
	I1017 19:22:22.256622  114312 main.go:141] libmachine: (addons-322722)     <boot dev='hd'/>
	I1017 19:22:22.256626  114312 main.go:141] libmachine: (addons-322722)     <bootmenu enable='no'/>
	I1017 19:22:22.256632  114312 main.go:141] libmachine: (addons-322722)   </os>
	I1017 19:22:22.256637  114312 main.go:141] libmachine: (addons-322722)   <devices>
	I1017 19:22:22.256641  114312 main.go:141] libmachine: (addons-322722)     <disk type='file' device='cdrom'>
	I1017 19:22:22.256651  114312 main.go:141] libmachine: (addons-322722)       <source file='/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/boot2docker.iso'/>
	I1017 19:22:22.256658  114312 main.go:141] libmachine: (addons-322722)       <target dev='hdc' bus='scsi'/>
	I1017 19:22:22.256662  114312 main.go:141] libmachine: (addons-322722)       <readonly/>
	I1017 19:22:22.256666  114312 main.go:141] libmachine: (addons-322722)     </disk>
	I1017 19:22:22.256671  114312 main.go:141] libmachine: (addons-322722)     <disk type='file' device='disk'>
	I1017 19:22:22.256676  114312 main.go:141] libmachine: (addons-322722)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1017 19:22:22.256687  114312 main.go:141] libmachine: (addons-322722)       <source file='/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/addons-322722.rawdisk'/>
	I1017 19:22:22.256694  114312 main.go:141] libmachine: (addons-322722)       <target dev='hda' bus='virtio'/>
	I1017 19:22:22.256700  114312 main.go:141] libmachine: (addons-322722)     </disk>
	I1017 19:22:22.256709  114312 main.go:141] libmachine: (addons-322722)     <interface type='network'>
	I1017 19:22:22.256715  114312 main.go:141] libmachine: (addons-322722)       <source network='mk-addons-322722'/>
	I1017 19:22:22.256724  114312 main.go:141] libmachine: (addons-322722)       <model type='virtio'/>
	I1017 19:22:22.256729  114312 main.go:141] libmachine: (addons-322722)     </interface>
	I1017 19:22:22.256739  114312 main.go:141] libmachine: (addons-322722)     <interface type='network'>
	I1017 19:22:22.256747  114312 main.go:141] libmachine: (addons-322722)       <source network='default'/>
	I1017 19:22:22.256751  114312 main.go:141] libmachine: (addons-322722)       <model type='virtio'/>
	I1017 19:22:22.256757  114312 main.go:141] libmachine: (addons-322722)     </interface>
	I1017 19:22:22.256762  114312 main.go:141] libmachine: (addons-322722)     <serial type='pty'>
	I1017 19:22:22.256767  114312 main.go:141] libmachine: (addons-322722)       <target port='0'/>
	I1017 19:22:22.256773  114312 main.go:141] libmachine: (addons-322722)     </serial>
	I1017 19:22:22.256778  114312 main.go:141] libmachine: (addons-322722)     <console type='pty'>
	I1017 19:22:22.256784  114312 main.go:141] libmachine: (addons-322722)       <target type='serial' port='0'/>
	I1017 19:22:22.256788  114312 main.go:141] libmachine: (addons-322722)     </console>
	I1017 19:22:22.256795  114312 main.go:141] libmachine: (addons-322722)     <rng model='virtio'>
	I1017 19:22:22.256801  114312 main.go:141] libmachine: (addons-322722)       <backend model='random'>/dev/random</backend>
	I1017 19:22:22.256807  114312 main.go:141] libmachine: (addons-322722)     </rng>
	I1017 19:22:22.256811  114312 main.go:141] libmachine: (addons-322722)   </devices>
	I1017 19:22:22.256815  114312 main.go:141] libmachine: (addons-322722) </domain>
	I1017 19:22:22.256884  114312 main.go:141] libmachine: (addons-322722) 
	I1017 19:22:22.266021  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:8b:27:16 in network default
	I1017 19:22:22.266676  114312 main.go:141] libmachine: (addons-322722) starting domain...
	I1017 19:22:22.266694  114312 main.go:141] libmachine: (addons-322722) ensuring networks are active...
	I1017 19:22:22.266706  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:22.267437  114312 main.go:141] libmachine: (addons-322722) Ensuring network default is active
	I1017 19:22:22.267776  114312 main.go:141] libmachine: (addons-322722) Ensuring network mk-addons-322722 is active
	I1017 19:22:22.268342  114312 main.go:141] libmachine: (addons-322722) getting domain XML...
	I1017 19:22:22.269408  114312 main.go:141] libmachine: (addons-322722) DBG | starting domain XML:
	I1017 19:22:22.269433  114312 main.go:141] libmachine: (addons-322722) DBG | <domain type='kvm'>
	I1017 19:22:22.269445  114312 main.go:141] libmachine: (addons-322722) DBG |   <name>addons-322722</name>
	I1017 19:22:22.269458  114312 main.go:141] libmachine: (addons-322722) DBG |   <uuid>a300d917-ae06-4f7e-b56b-46341932225f</uuid>
	I1017 19:22:22.269468  114312 main.go:141] libmachine: (addons-322722) DBG |   <memory unit='KiB'>4194304</memory>
	I1017 19:22:22.269476  114312 main.go:141] libmachine: (addons-322722) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1017 19:22:22.269487  114312 main.go:141] libmachine: (addons-322722) DBG |   <vcpu placement='static'>2</vcpu>
	I1017 19:22:22.269494  114312 main.go:141] libmachine: (addons-322722) DBG |   <os>
	I1017 19:22:22.269505  114312 main.go:141] libmachine: (addons-322722) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1017 19:22:22.269519  114312 main.go:141] libmachine: (addons-322722) DBG |     <boot dev='cdrom'/>
	I1017 19:22:22.269531  114312 main.go:141] libmachine: (addons-322722) DBG |     <boot dev='hd'/>
	I1017 19:22:22.269538  114312 main.go:141] libmachine: (addons-322722) DBG |     <bootmenu enable='no'/>
	I1017 19:22:22.269547  114312 main.go:141] libmachine: (addons-322722) DBG |   </os>
	I1017 19:22:22.269553  114312 main.go:141] libmachine: (addons-322722) DBG |   <features>
	I1017 19:22:22.269561  114312 main.go:141] libmachine: (addons-322722) DBG |     <acpi/>
	I1017 19:22:22.269571  114312 main.go:141] libmachine: (addons-322722) DBG |     <apic/>
	I1017 19:22:22.269579  114312 main.go:141] libmachine: (addons-322722) DBG |     <pae/>
	I1017 19:22:22.269592  114312 main.go:141] libmachine: (addons-322722) DBG |   </features>
	I1017 19:22:22.269606  114312 main.go:141] libmachine: (addons-322722) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1017 19:22:22.269619  114312 main.go:141] libmachine: (addons-322722) DBG |   <clock offset='utc'/>
	I1017 19:22:22.269630  114312 main.go:141] libmachine: (addons-322722) DBG |   <on_poweroff>destroy</on_poweroff>
	I1017 19:22:22.269640  114312 main.go:141] libmachine: (addons-322722) DBG |   <on_reboot>restart</on_reboot>
	I1017 19:22:22.269651  114312 main.go:141] libmachine: (addons-322722) DBG |   <on_crash>destroy</on_crash>
	I1017 19:22:22.269662  114312 main.go:141] libmachine: (addons-322722) DBG |   <devices>
	I1017 19:22:22.269676  114312 main.go:141] libmachine: (addons-322722) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1017 19:22:22.269686  114312 main.go:141] libmachine: (addons-322722) DBG |     <disk type='file' device='cdrom'>
	I1017 19:22:22.269696  114312 main.go:141] libmachine: (addons-322722) DBG |       <driver name='qemu' type='raw'/>
	I1017 19:22:22.269711  114312 main.go:141] libmachine: (addons-322722) DBG |       <source file='/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/boot2docker.iso'/>
	I1017 19:22:22.269724  114312 main.go:141] libmachine: (addons-322722) DBG |       <target dev='hdc' bus='scsi'/>
	I1017 19:22:22.269735  114312 main.go:141] libmachine: (addons-322722) DBG |       <readonly/>
	I1017 19:22:22.269749  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1017 19:22:22.269759  114312 main.go:141] libmachine: (addons-322722) DBG |     </disk>
	I1017 19:22:22.269768  114312 main.go:141] libmachine: (addons-322722) DBG |     <disk type='file' device='disk'>
	I1017 19:22:22.269779  114312 main.go:141] libmachine: (addons-322722) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1017 19:22:22.269796  114312 main.go:141] libmachine: (addons-322722) DBG |       <source file='/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/addons-322722.rawdisk'/>
	I1017 19:22:22.269810  114312 main.go:141] libmachine: (addons-322722) DBG |       <target dev='hda' bus='virtio'/>
	I1017 19:22:22.269819  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1017 19:22:22.269832  114312 main.go:141] libmachine: (addons-322722) DBG |     </disk>
	I1017 19:22:22.269844  114312 main.go:141] libmachine: (addons-322722) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1017 19:22:22.269874  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1017 19:22:22.269886  114312 main.go:141] libmachine: (addons-322722) DBG |     </controller>
	I1017 19:22:22.269899  114312 main.go:141] libmachine: (addons-322722) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1017 19:22:22.269918  114312 main.go:141] libmachine: (addons-322722) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1017 19:22:22.269931  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1017 19:22:22.269986  114312 main.go:141] libmachine: (addons-322722) DBG |     </controller>
	I1017 19:22:22.270015  114312 main.go:141] libmachine: (addons-322722) DBG |     <interface type='network'>
	I1017 19:22:22.270023  114312 main.go:141] libmachine: (addons-322722) DBG |       <mac address='52:54:00:20:c0:9a'/>
	I1017 19:22:22.270028  114312 main.go:141] libmachine: (addons-322722) DBG |       <source network='mk-addons-322722'/>
	I1017 19:22:22.270047  114312 main.go:141] libmachine: (addons-322722) DBG |       <model type='virtio'/>
	I1017 19:22:22.270054  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1017 19:22:22.270061  114312 main.go:141] libmachine: (addons-322722) DBG |     </interface>
	I1017 19:22:22.270068  114312 main.go:141] libmachine: (addons-322722) DBG |     <interface type='network'>
	I1017 19:22:22.270074  114312 main.go:141] libmachine: (addons-322722) DBG |       <mac address='52:54:00:8b:27:16'/>
	I1017 19:22:22.270081  114312 main.go:141] libmachine: (addons-322722) DBG |       <source network='default'/>
	I1017 19:22:22.270087  114312 main.go:141] libmachine: (addons-322722) DBG |       <model type='virtio'/>
	I1017 19:22:22.270094  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1017 19:22:22.270102  114312 main.go:141] libmachine: (addons-322722) DBG |     </interface>
	I1017 19:22:22.270106  114312 main.go:141] libmachine: (addons-322722) DBG |     <serial type='pty'>
	I1017 19:22:22.270114  114312 main.go:141] libmachine: (addons-322722) DBG |       <target type='isa-serial' port='0'>
	I1017 19:22:22.270119  114312 main.go:141] libmachine: (addons-322722) DBG |         <model name='isa-serial'/>
	I1017 19:22:22.270124  114312 main.go:141] libmachine: (addons-322722) DBG |       </target>
	I1017 19:22:22.270134  114312 main.go:141] libmachine: (addons-322722) DBG |     </serial>
	I1017 19:22:22.270142  114312 main.go:141] libmachine: (addons-322722) DBG |     <console type='pty'>
	I1017 19:22:22.270150  114312 main.go:141] libmachine: (addons-322722) DBG |       <target type='serial' port='0'/>
	I1017 19:22:22.270181  114312 main.go:141] libmachine: (addons-322722) DBG |     </console>
	I1017 19:22:22.270208  114312 main.go:141] libmachine: (addons-322722) DBG |     <input type='mouse' bus='ps2'/>
	I1017 19:22:22.270219  114312 main.go:141] libmachine: (addons-322722) DBG |     <input type='keyboard' bus='ps2'/>
	I1017 19:22:22.270229  114312 main.go:141] libmachine: (addons-322722) DBG |     <audio id='1' type='none'/>
	I1017 19:22:22.270238  114312 main.go:141] libmachine: (addons-322722) DBG |     <memballoon model='virtio'>
	I1017 19:22:22.270251  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1017 19:22:22.270263  114312 main.go:141] libmachine: (addons-322722) DBG |     </memballoon>
	I1017 19:22:22.270273  114312 main.go:141] libmachine: (addons-322722) DBG |     <rng model='virtio'>
	I1017 19:22:22.270282  114312 main.go:141] libmachine: (addons-322722) DBG |       <backend model='random'>/dev/random</backend>
	I1017 19:22:22.270298  114312 main.go:141] libmachine: (addons-322722) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1017 19:22:22.270309  114312 main.go:141] libmachine: (addons-322722) DBG |     </rng>
	I1017 19:22:22.270315  114312 main.go:141] libmachine: (addons-322722) DBG |   </devices>
	I1017 19:22:22.270326  114312 main.go:141] libmachine: (addons-322722) DBG | </domain>
	I1017 19:22:22.270336  114312 main.go:141] libmachine: (addons-322722) DBG | 
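The XML dumped above is the complete domain definition the kvm2 driver hands to libvirt before the "waiting for domain to start..." line. A minimal sketch of how such a definition is applied and booted, assuming the libvirt.org/go/libvirt bindings (defineAndStart is a hypothetical helper, not the driver's actual code):

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart applies a domain XML definition (like the one dumped
    // above) and boots it; success corresponds to the driver's
    // "domain is now running" log line.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config
        if err != nil {
            return fmt.Errorf("connect: %w", err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // persistent definition
        if err != nil {
            return fmt.Errorf("define: %w", err)
        }
        defer dom.Free()

        return dom.Create() // starts the VM
    }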
	I1017 19:22:23.593422  114312 main.go:141] libmachine: (addons-322722) waiting for domain to start...
	I1017 19:22:23.595055  114312 main.go:141] libmachine: (addons-322722) domain is now running
	I1017 19:22:23.595085  114312 main.go:141] libmachine: (addons-322722) waiting for IP...
	I1017 19:22:23.595741  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:23.596361  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:23.596386  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:23.596752  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:23.596784  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:23.596725  114341 retry.go:31] will retry after 272.605528ms: waiting for domain to come up
	I1017 19:22:23.871185  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:23.871672  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:23.871700  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:23.872071  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:23.872101  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:23.872039  114341 retry.go:31] will retry after 379.210169ms: waiting for domain to come up
	I1017 19:22:24.252754  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:24.253343  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:24.253373  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:24.253714  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:24.253744  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:24.253662  114341 retry.go:31] will retry after 412.430264ms: waiting for domain to come up
	I1017 19:22:24.667360  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:24.667882  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:24.667913  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:24.668237  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:24.668265  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:24.668184  114341 retry.go:31] will retry after 563.525522ms: waiting for domain to come up
	I1017 19:22:25.233297  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:25.233758  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:25.233789  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:25.234068  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:25.234152  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:25.234071  114341 retry.go:31] will retry after 545.006477ms: waiting for domain to come up
	I1017 19:22:25.780967  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:25.781563  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:25.781589  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:25.781841  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:25.781899  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:25.781823  114341 retry.go:31] will retry after 820.23693ms: waiting for domain to come up
	I1017 19:22:26.603741  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:26.604214  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:26.604241  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:26.604519  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:26.604594  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:26.604512  114341 retry.go:31] will retry after 972.33513ms: waiting for domain to come up
	I1017 19:22:27.578827  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:27.579406  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:27.579434  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:27.579695  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:27.579724  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:27.579670  114341 retry.go:31] will retry after 1.140928032s: waiting for domain to come up
	I1017 19:22:28.722125  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:28.722517  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:28.722544  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:28.722760  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:28.722799  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:28.722734  114341 retry.go:31] will retry after 1.617256297s: waiting for domain to come up
	I1017 19:22:30.342725  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:30.343255  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:30.343280  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:30.343597  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:30.343634  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:30.343541  114341 retry.go:31] will retry after 2.213832331s: waiting for domain to come up
	I1017 19:22:32.560045  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:32.560606  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:32.560639  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:32.560902  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:32.560983  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:32.560915  114341 retry.go:31] will retry after 2.758112123s: waiting for domain to come up
	I1017 19:22:35.322989  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:35.323545  114312 main.go:141] libmachine: (addons-322722) DBG | no network interface addresses found for domain addons-322722 (source=lease)
	I1017 19:22:35.323574  114312 main.go:141] libmachine: (addons-322722) DBG | trying to list again with source=arp
	I1017 19:22:35.323911  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find current IP address of domain addons-322722 in network mk-addons-322722 (interfaces detected: [])
	I1017 19:22:35.323941  114312 main.go:141] libmachine: (addons-322722) DBG | I1017 19:22:35.323830  114341 retry.go:31] will retry after 3.472991573s: waiting for domain to come up
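Each failed listing above ends with retry.go scheduling another attempt after a roughly doubling, jittered delay (272ms, 379ms, ... 3.47s). A minimal sketch of that poll-with-backoff pattern (waitForIP is a hypothetical helper, not minikube's retry package):

    package main

    import (
        "errors"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping with
    // jittered exponential backoff between attempts, like the
    // "will retry after ..." lines above.
    func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        deadline := time.Now().Add(maxWait)
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // jitter: sleep 50%-150% of the nominal delay, then double it
            time.Sleep(time.Duration(float64(delay) * (0.5 + rand.Float64())))
            delay *= 2
        }
        return "", errors.New("timed out waiting for domain IP")
    }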
	I1017 19:22:38.800352  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:38.800956  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has current primary IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:38.800986  114312 main.go:141] libmachine: (addons-322722) found domain IP: 192.168.39.86
	I1017 19:22:38.801001  114312 main.go:141] libmachine: (addons-322722) reserving static IP address...
	I1017 19:22:38.801346  114312 main.go:141] libmachine: (addons-322722) DBG | unable to find host DHCP lease matching {name: "addons-322722", mac: "52:54:00:20:c0:9a", ip: "192.168.39.86"} in network mk-addons-322722
	I1017 19:22:39.015954  114312 main.go:141] libmachine: (addons-322722) DBG | Getting to WaitForSSH function...
	I1017 19:22:39.015982  114312 main.go:141] libmachine: (addons-322722) reserved static IP address 192.168.39.86 for domain addons-322722
	I1017 19:22:39.015994  114312 main.go:141] libmachine: (addons-322722) waiting for SSH...
	I1017 19:22:39.019071  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.019542  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.019572  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.019832  114312 main.go:141] libmachine: (addons-322722) DBG | Using SSH client type: external
	I1017 19:22:39.019870  114312 main.go:141] libmachine: (addons-322722) DBG | Using SSH private key: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa (-rw-------)
	I1017 19:22:39.019909  114312 main.go:141] libmachine: (addons-322722) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1017 19:22:39.019928  114312 main.go:141] libmachine: (addons-322722) DBG | About to run SSH command:
	I1017 19:22:39.019937  114312 main.go:141] libmachine: (addons-322722) DBG | exit 0
	I1017 19:22:39.158266  114312 main.go:141] libmachine: (addons-322722) DBG | SSH cmd err, output: <nil>: 
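The external-ssh probe above simply runs `exit 0` against the guest until sshd accepts the key. The same readiness check can be expressed with golang.org/x/crypto/ssh; a sketch under the assumption of a plain PEM key file (sshReady is a hypothetical helper):

    package main

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady reports whether addr ("host:22") accepts the given private
    // key and can run a trivial command, i.e. the "exit 0" probe above.
    func sshReady(addr, user, keyPath string) bool {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return false
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return false
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,            // mirrors ConnectTimeout=10
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return false
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return false
        }
        defer sess.Close()
        return sess.Run("exit 0") == nil
    }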
	I1017 19:22:39.158527  114312 main.go:141] libmachine: (addons-322722) domain creation complete
	I1017 19:22:39.158960  114312 main.go:141] libmachine: (addons-322722) Calling .GetConfigRaw
	I1017 19:22:39.159543  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:39.159796  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:39.160039  114312 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1017 19:22:39.160055  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:22:39.161747  114312 main.go:141] libmachine: Detecting operating system of created instance...
	I1017 19:22:39.161762  114312 main.go:141] libmachine: Waiting for SSH to be available...
	I1017 19:22:39.161766  114312 main.go:141] libmachine: Getting to WaitForSSH function...
	I1017 19:22:39.161772  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.164221  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.164564  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.164586  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.164705  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:39.164889  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.165060  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.165199  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:39.165348  114312 main.go:141] libmachine: Using SSH client type: native
	I1017 19:22:39.165708  114312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1017 19:22:39.165725  114312 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1017 19:22:39.270594  114312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:22:39.270638  114312 main.go:141] libmachine: Detecting the provisioner...
	I1017 19:22:39.270647  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.273954  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.274440  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.274472  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.274572  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:39.274761  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.274980  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.275178  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:39.275361  114312 main.go:141] libmachine: Using SSH client type: native
	I1017 19:22:39.275563  114312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1017 19:22:39.275574  114312 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1017 19:22:39.381208  114312 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1017 19:22:39.381303  114312 main.go:141] libmachine: found compatible host: buildroot
	I1017 19:22:39.381316  114312 main.go:141] libmachine: Provisioning with buildroot...
	I1017 19:22:39.381335  114312 main.go:141] libmachine: (addons-322722) Calling .GetMachineName
	I1017 19:22:39.381606  114312 buildroot.go:166] provisioning hostname "addons-322722"
	I1017 19:22:39.381640  114312 main.go:141] libmachine: (addons-322722) Calling .GetMachineName
	I1017 19:22:39.381868  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.384887  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.385382  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.385406  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.385685  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:39.385927  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.386077  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.386179  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:39.386339  114312 main.go:141] libmachine: Using SSH client type: native
	I1017 19:22:39.386589  114312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1017 19:22:39.386605  114312 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-322722 && echo "addons-322722" | sudo tee /etc/hostname
	I1017 19:22:39.508353  114312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-322722
	
	I1017 19:22:39.508388  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.511507  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.511922  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.511946  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.512210  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:39.512396  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.512538  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.512665  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:39.512829  114312 main.go:141] libmachine: Using SSH client type: native
	I1017 19:22:39.513131  114312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1017 19:22:39.513159  114312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-322722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-322722/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-322722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:22:39.630315  114312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:22:39.630345  114312 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-109682/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-109682/.minikube}
	I1017 19:22:39.630399  114312 buildroot.go:174] setting up certificates
	I1017 19:22:39.630412  114312 provision.go:84] configureAuth start
	I1017 19:22:39.630425  114312 main.go:141] libmachine: (addons-322722) Calling .GetMachineName
	I1017 19:22:39.630693  114312 main.go:141] libmachine: (addons-322722) Calling .GetIP
	I1017 19:22:39.633873  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.634286  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.634308  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.634485  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.636576  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.636981  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.637014  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.637184  114312 provision.go:143] copyHostCerts
	I1017 19:22:39.637249  114312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-109682/.minikube/key.pem (1675 bytes)
	I1017 19:22:39.637368  114312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-109682/.minikube/ca.pem (1082 bytes)
	I1017 19:22:39.637428  114312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-109682/.minikube/cert.pem (1123 bytes)
	I1017 19:22:39.637485  114312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca-key.pem org=jenkins.addons-322722 san=[127.0.0.1 192.168.39.86 addons-322722 localhost minikube]
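The server cert above is issued from the local CA with SANs covering the loopback address, the VM IP, the machine name, and the usual localhost names. A compact sketch of producing such a SAN certificate with crypto/x509 (self-signed here for brevity; the real flow signs with the minikubeCA key, and serverCertDER is a hypothetical helper):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertDER builds a certificate whose SANs match the
    // "san=[...]" list logged above and returns it in DER form.
    func serverCertDER() ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-322722"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            DNSNames:     []string{"addons-322722", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.86")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // self-signed: template doubles as parent
        return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    }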
	I1017 19:22:39.803859  114312 provision.go:177] copyRemoteCerts
	I1017 19:22:39.803918  114312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:22:39.803945  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.806758  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.807287  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.807319  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.807617  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:39.807868  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.808090  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:39.808307  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:22:39.891512  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:22:39.920714  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:22:39.953701  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:22:39.984671  114312 provision.go:87] duration metric: took 354.241971ms to configureAuth
	I1017 19:22:39.984706  114312 buildroot.go:189] setting minikube options for container-runtime
	I1017 19:22:39.984943  114312 config.go:182] Loaded profile config "addons-322722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:22:39.985039  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:39.988779  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.989135  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:39.989170  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:39.989411  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:39.989601  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.989758  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:39.989874  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:39.990014  114312 main.go:141] libmachine: Using SSH client type: native
	I1017 19:22:39.990273  114312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1017 19:22:39.990292  114312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:22:40.223828  114312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:22:40.223876  114312 main.go:141] libmachine: Checking connection to Docker...
	I1017 19:22:40.223887  114312 main.go:141] libmachine: (addons-322722) Calling .GetURL
	I1017 19:22:40.225359  114312 main.go:141] libmachine: (addons-322722) DBG | using libvirt version 8000000
	I1017 19:22:40.228095  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.228395  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.228420  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.228591  114312 main.go:141] libmachine: Docker is up and running!
	I1017 19:22:40.228604  114312 main.go:141] libmachine: Reticulating splines...
	I1017 19:22:40.228613  114312 client.go:171] duration metric: took 19.195235009s to LocalClient.Create
	I1017 19:22:40.228642  114312 start.go:167] duration metric: took 19.195316111s to libmachine.API.Create "addons-322722"
	I1017 19:22:40.228656  114312 start.go:293] postStartSetup for "addons-322722" (driver="kvm2")
	I1017 19:22:40.228669  114312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:22:40.228693  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:40.228968  114312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:22:40.228995  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:40.231169  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.231519  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.231542  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.231659  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:40.231896  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:40.232088  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:40.232248  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:22:40.314798  114312 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:22:40.319309  114312 info.go:137] Remote host: Buildroot 2025.02
	I1017 19:22:40.319335  114312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-109682/.minikube/addons for local assets ...
	I1017 19:22:40.319404  114312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-109682/.minikube/files for local assets ...
	I1017 19:22:40.319427  114312 start.go:296] duration metric: took 90.764719ms for postStartSetup
	I1017 19:22:40.319462  114312 main.go:141] libmachine: (addons-322722) Calling .GetConfigRaw
	I1017 19:22:40.320084  114312 main.go:141] libmachine: (addons-322722) Calling .GetIP
	I1017 19:22:40.323164  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.323580  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.323609  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.323911  114312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/config.json ...
	I1017 19:22:40.324098  114312 start.go:128] duration metric: took 19.30730732s to createHost
	I1017 19:22:40.324124  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:40.326662  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.327019  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.327047  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.327200  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:40.327407  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:40.327589  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:40.327735  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:40.327963  114312 main.go:141] libmachine: Using SSH client type: native
	I1017 19:22:40.328230  114312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I1017 19:22:40.328244  114312 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 19:22:40.433524  114312 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760728960.407048799
	
	I1017 19:22:40.433545  114312 fix.go:216] guest clock: 1760728960.407048799
	I1017 19:22:40.433553  114312 fix.go:229] Guest: 2025-10-17 19:22:40.407048799 +0000 UTC Remote: 2025-10-17 19:22:40.324112956 +0000 UTC m=+19.428674501 (delta=82.935843ms)
	I1017 19:22:40.433573  114312 fix.go:200] guest clock delta is within tolerance: 82.935843ms
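The clock check reads `date +%s.%N` on the guest and compares it against the host-side timestamp taken when the command returned; a delta under the tolerance (about 83ms here) means no clock fix-up is needed. A sketch of that comparison, parsing the same seconds.nanoseconds output (clockDelta is a hypothetical helper):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses "1760728960.407048799" (the guest's `date +%s.%N`)
    // and returns how far the guest clock is from the given local time,
    // as in the "guest clock delta is within tolerance" line above.
    func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, fmt.Errorf("seconds: %w", err)
        }
        var nsec int64
        if len(parts) == 2 {
            // right-pad to 9 digits so ".4" means 400ms, not 4ns
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, fmt.Errorf("nanoseconds: %w", err)
            }
        }
        return time.Unix(sec, nsec).Sub(local), nil
    }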
	I1017 19:22:40.433579  114312 start.go:83] releasing machines lock for "addons-322722", held for 19.416853317s
	I1017 19:22:40.433602  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:40.433887  114312 main.go:141] libmachine: (addons-322722) Calling .GetIP
	I1017 19:22:40.437299  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.437753  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.437778  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.437976  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:40.438471  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:40.438694  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:22:40.438804  114312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:22:40.438865  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:40.438972  114312 ssh_runner.go:195] Run: cat /version.json
	I1017 19:22:40.439003  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:22:40.441900  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.442271  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.442303  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.442322  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.442470  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:40.442671  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:40.442818  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:40.442830  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:40.442839  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:40.443096  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:22:40.443133  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:22:40.443278  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:22:40.443450  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:22:40.443607  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:22:40.550260  114312 ssh_runner.go:195] Run: systemctl --version
	I1017 19:22:40.557074  114312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:22:40.714065  114312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:22:40.721430  114312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:22:40.721525  114312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:22:40.740608  114312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 19:22:40.740642  114312 start.go:495] detecting cgroup driver to use...
	I1017 19:22:40.740715  114312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:22:40.759960  114312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:22:40.777791  114312 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:22:40.777880  114312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:22:40.794716  114312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:22:40.811216  114312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:22:40.951883  114312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:22:41.162746  114312 docker.go:234] disabling docker service ...
	I1017 19:22:41.162817  114312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:22:41.179120  114312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:22:41.194055  114312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:22:41.349304  114312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:22:41.493131  114312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:22:41.509671  114312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:22:41.533369  114312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:22:41.533449  114312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.545946  114312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:22:41.546026  114312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.559035  114312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.571771  114312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.584235  114312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:22:41.598075  114312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.611295  114312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.633952  114312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:22:41.647282  114312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:22:41.659604  114312 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1017 19:22:41.659682  114312 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1017 19:22:41.679640  114312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:22:41.691784  114312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:22:41.834929  114312 ssh_runner.go:195] Run: sudo systemctl restart crio
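The netfilter check a few lines up tolerates failure: if the bridge sysctl key is missing ("which might be okay"), loading br_netfilter creates it. A sketch of that probe-then-modprobe fallback, with the same path and command as the log but simplified error handling (ensureBrNetfilter is a hypothetical helper):

    package main

    import (
        "os"
        "os/exec"
    )

    // ensureBrNetfilter mirrors the sequence above: if the bridge-nf
    // sysctl isn't visible, loading the br_netfilter module exposes it.
    func ensureBrNetfilter() error {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err == nil {
            return nil // module already loaded, sysctl present
        }
        return exec.Command("sudo", "modprobe", "br_netfilter").Run()
    }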
	I1017 19:22:42.266783  114312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:22:42.266901  114312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:22:42.272387  114312 start.go:563] Will wait 60s for crictl version
	I1017 19:22:42.272466  114312 ssh_runner.go:195] Run: which crictl
	I1017 19:22:42.276557  114312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 19:22:42.316634  114312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1017 19:22:42.316747  114312 ssh_runner.go:195] Run: crio --version
	I1017 19:22:42.346935  114312 ssh_runner.go:195] Run: crio --version
	I1017 19:22:42.378359  114312 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1017 19:22:42.379597  114312 main.go:141] libmachine: (addons-322722) Calling .GetIP
	I1017 19:22:42.382822  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:42.383259  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:22:42.383291  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:22:42.383536  114312 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 19:22:42.388419  114312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:22:42.404284  114312 kubeadm.go:883] updating cluster {Name:addons-322722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-322722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:22:42.404394  114312 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:22:42.404436  114312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:22:42.440982  114312 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1017 19:22:42.441081  114312 ssh_runner.go:195] Run: which lz4
	I1017 19:22:42.445561  114312 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1017 19:22:42.450587  114312 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1017 19:22:42.450624  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1017 19:22:43.797420  114312 crio.go:462] duration metric: took 1.351894013s to copy over tarball
	I1017 19:22:43.797493  114312 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1017 19:22:45.489827  114312 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.692302498s)
	I1017 19:22:45.489885  114312 crio.go:469] duration metric: took 1.692434626s to extract the tarball
	I1017 19:22:45.489906  114312 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1017 19:22:45.532190  114312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:22:45.586577  114312 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:22:45.586605  114312 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:22:45.586616  114312 kubeadm.go:934] updating node { 192.168.39.86 8443 v1.34.1 crio true true} ...
	I1017 19:22:45.586731  114312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-322722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-322722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:22:45.586821  114312 ssh_runner.go:195] Run: crio config
	I1017 19:22:45.640956  114312 cni.go:84] Creating CNI manager for ""
	I1017 19:22:45.640981  114312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:22:45.641003  114312 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:22:45.641035  114312 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-322722 NodeName:addons-322722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:22:45.641192  114312 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-322722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.86"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:22:45.641266  114312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:22:45.656253  114312 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:22:45.656344  114312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:22:45.671163  114312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1017 19:22:45.695153  114312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:22:45.718501  114312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1017 19:22:45.741806  114312 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I1017 19:22:45.746158  114312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:22:45.763646  114312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:22:45.912540  114312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:22:45.943947  114312 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722 for IP: 192.168.39.86
	I1017 19:22:45.943973  114312 certs.go:195] generating shared ca certs ...
	I1017 19:22:45.943995  114312 certs.go:227] acquiring lock for ca certs: {Name:mk1628109f16dfe58c75b776fa21265e79b64c50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:45.944274  114312 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.key
	I1017 19:22:46.320518  114312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt ...
	I1017 19:22:46.320551  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt: {Name:mka9b3b38f3351cd243ea8de45cb7a159e4ddebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.320770  114312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-109682/.minikube/ca.key ...
	I1017 19:22:46.320791  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/ca.key: {Name:mk8ef3a43aab59e809ce6d0c5a677c8430c78b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.320915  114312 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.key
	I1017 19:22:46.484076  114312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.crt ...
	I1017 19:22:46.484110  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.crt: {Name:mk2c10fc5f22f4760e5b88038f2fe03e3d876b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.484318  114312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.key ...
	I1017 19:22:46.484339  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.key: {Name:mk9ebbbc5c6bd15e814a591ae6417e423a488648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.484452  114312 certs.go:257] generating profile certs ...
	I1017 19:22:46.484515  114312 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.key
	I1017 19:22:46.484541  114312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt with IP's: []
	I1017 19:22:46.740272  114312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt ...
	I1017 19:22:46.740307  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: {Name:mk7647c6afb2ee8ba9c688241ebbd965aac1c5bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.740517  114312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.key ...
	I1017 19:22:46.740535  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.key: {Name:mk8f186a208018c29e8b5373baafa8e4f6db87ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.740661  114312 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.key.b8e8f39a
	I1017 19:22:46.740682  114312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.crt.b8e8f39a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.86]
	I1017 19:22:46.940167  114312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.crt.b8e8f39a ...
	I1017 19:22:46.940199  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.crt.b8e8f39a: {Name:mk06d801f105e7771c3372c6405347b12bd65d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.940413  114312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.key.b8e8f39a ...
	I1017 19:22:46.940433  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.key.b8e8f39a: {Name:mk38881bc9d0eedb4ea15339900b89c060ad869b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:46.940543  114312 certs.go:382] copying /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.crt.b8e8f39a -> /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.crt
	I1017 19:22:46.940652  114312 certs.go:386] copying /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.key.b8e8f39a -> /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.key
	I1017 19:22:46.940716  114312 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.key
	I1017 19:22:46.940735  114312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.crt with IP's: []
	I1017 19:22:47.258099  114312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.crt ...
	I1017 19:22:47.258129  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.crt: {Name:mk2d325448ff26f37af984af4fe0a9f61e07dcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:47.258326  114312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.key ...
	I1017 19:22:47.258346  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.key: {Name:mk1387c83d84bc8fee6893fd05dbf650c3e4d5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:22:47.258552  114312 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:22:47.258592  114312 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:22:47.258617  114312 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:22:47.258641  114312 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/key.pem (1675 bytes)
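The certs.go steps above build a local CA, then sign per-profile certificates whose SAN list covers the service VIP and node IP ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.86] in the apiserver cert). A rough openssl equivalent of that flow, purely illustrative (file names and subjects are assumptions, not minikube's actual parameters):

    # Self-signed CA, analogous to the minikubeCA generation
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout ca.key -out ca.crt -subj "/CN=minikubeCA"

    # Server key + CSR, then sign with the SANs from the log above
    openssl req -newkey rsa:2048 -nodes -keyout apiserver.key \
      -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.86')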
	I1017 19:22:47.259214  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:22:47.291075  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:22:47.323040  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:22:47.354712  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:22:47.386293  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:22:47.416400  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:22:47.446829  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:22:47.478307  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 19:22:47.510567  114312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:22:47.546323  114312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:22:47.569041  114312 ssh_runner.go:195] Run: openssl version
	I1017 19:22:47.577549  114312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:22:47.591592  114312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:22:47.597557  114312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:22 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:22:47.597631  114312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:22:47.606660  114312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
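The b5213941.0 name follows OpenSSL's c_rehash convention: the symlink is named after the certificate's subject-name hash plus a .0 suffix, which is exactly what the `openssl x509 -hash -noout` call two lines up computes. Condensed:

    # Derive the hash-based name OpenSSL looks up in /etc/ssl/certs
    h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0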
	I1017 19:22:47.621230  114312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:22:47.626222  114312 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:22:47.626293  114312 kubeadm.go:400] StartCluster: {Name:addons-322722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-322722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:22:47.626381  114312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:22:47.626431  114312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:22:47.665510  114312 cri.go:89] found id: ""
	I1017 19:22:47.665605  114312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:22:47.678425  114312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:22:47.691167  114312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:22:47.703939  114312 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:22:47.703963  114312 kubeadm.go:157] found existing configuration files:
	
	I1017 19:22:47.704020  114312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 19:22:47.715355  114312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:22:47.715415  114312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:22:47.727517  114312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 19:22:47.738745  114312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:22:47.738822  114312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:22:47.751032  114312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 19:22:47.762668  114312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:22:47.762744  114312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:22:47.774911  114312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 19:22:47.786553  114312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:22:47.786630  114312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
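The four grep/rm pairs above apply one rule per kubeconfig: if the file does not reference the expected control-plane endpoint (here because it does not exist at all on first start), delete it so kubeadm regenerates it. The same logic as a loop:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # Missing file or stale endpoint -> remove so kubeadm rewrites it
      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done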
	I1017 19:22:47.799016  114312 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1017 19:22:47.853118  114312 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:22:47.853247  114312 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:22:47.953368  114312 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:22:47.953470  114312 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:22:47.953571  114312 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:22:47.967598  114312 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:22:48.089826  114312 out.go:252]   - Generating certificates and keys ...
	I1017 19:22:48.089989  114312 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:22:48.090125  114312 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:22:48.393508  114312 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:22:48.666760  114312 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:22:49.145525  114312 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:22:50.504619  114312 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:22:51.112997  114312 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:22:51.113122  114312 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-322722 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I1017 19:22:51.330652  114312 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:22:51.330965  114312 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-322722 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I1017 19:22:51.715222  114312 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:22:51.752822  114312 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:22:51.892529  114312 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:22:51.892605  114312 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:22:52.120275  114312 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:22:52.484158  114312 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:22:52.588971  114312 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:22:52.758802  114312 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:22:52.896474  114312 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:22:52.897317  114312 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:22:52.899376  114312 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:22:53.000238  114312 out.go:252]   - Booting up control plane ...
	I1017 19:22:53.000411  114312 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:22:53.000550  114312 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:22:53.000645  114312 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:22:53.000809  114312 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:22:53.000988  114312 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:22:53.001160  114312 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:22:53.001284  114312 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:22:53.001369  114312 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:22:53.091924  114312 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:22:53.092096  114312 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:22:54.092638  114312 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001429284s
	I1017 19:22:54.095340  114312 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:22:54.095460  114312 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.86:8443/livez
	I1017 19:22:54.095588  114312 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:22:54.095741  114312 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:22:56.549005  114312 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.454652466s
	I1017 19:22:58.356736  114312 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.263434393s
	I1017 19:23:00.096495  114312 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.003763383s
	I1017 19:23:00.116153  114312 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:23:00.133708  114312 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:23:00.150752  114312 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:23:00.151010  114312 kubeadm.go:318] [mark-control-plane] Marking the node addons-322722 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:23:00.165667  114312 kubeadm.go:318] [bootstrap-token] Using token: cv1mo2.2pimdh7qxqyzamk2
	I1017 19:23:00.168139  114312 out.go:252]   - Configuring RBAC rules ...
	I1017 19:23:00.168300  114312 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:23:00.172151  114312 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:23:00.181402  114312 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:23:00.185126  114312 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:23:00.187735  114312 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:23:00.190882  114312 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:23:00.506400  114312 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:23:00.951895  114312 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:23:01.509877  114312 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:23:01.511290  114312 kubeadm.go:318] 
	I1017 19:23:01.511356  114312 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:23:01.511361  114312 kubeadm.go:318] 
	I1017 19:23:01.511437  114312 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:23:01.511449  114312 kubeadm.go:318] 
	I1017 19:23:01.511480  114312 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:23:01.511549  114312 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:23:01.511612  114312 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:23:01.511621  114312 kubeadm.go:318] 
	I1017 19:23:01.511681  114312 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:23:01.511688  114312 kubeadm.go:318] 
	I1017 19:23:01.511751  114312 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:23:01.511761  114312 kubeadm.go:318] 
	I1017 19:23:01.511831  114312 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:23:01.511951  114312 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:23:01.512048  114312 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:23:01.512057  114312 kubeadm.go:318] 
	I1017 19:23:01.512169  114312 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:23:01.512285  114312 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:23:01.512296  114312 kubeadm.go:318] 
	I1017 19:23:01.512437  114312 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token cv1mo2.2pimdh7qxqyzamk2 \
	I1017 19:23:01.512593  114312 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b0dc0a980b2daad850808fb1f5f63ad9e11814e933371072ba8b7b1dcd6f2aa \
	I1017 19:23:01.512627  114312 kubeadm.go:318] 	--control-plane 
	I1017 19:23:01.512651  114312 kubeadm.go:318] 
	I1017 19:23:01.512762  114312 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:23:01.512778  114312 kubeadm.go:318] 
	I1017 19:23:01.512943  114312 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cv1mo2.2pimdh7qxqyzamk2 \
	I1017 19:23:01.513067  114312 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:5b0dc0a980b2daad850808fb1f5f63ad9e11814e933371072ba8b7b1dcd6f2aa 
	I1017 19:23:01.514654  114312 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
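The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key in DER form. A joining node can recompute it from ca.crt (standard kubeadm recipe, using minikube's cert path from the scp steps above) to verify it is talking to the right control plane:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print 5b0dc0a980b2daad850808fb1f5f63ad9e11814e933371072ba8b7b1dcd6f2aa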
	I1017 19:23:01.514702  114312 cni.go:84] Creating CNI manager for ""
	I1017 19:23:01.514721  114312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:23:01.516396  114312 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1017 19:23:01.517797  114312 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1017 19:23:01.530924  114312 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
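The 496-byte 1-k8s.conflist written here configures the bridge CNI that the "kvm2 driver + crio runtime" branch selected. The log does not show the file's contents; the following is an illustrative bridge conflist of the same shape (every value below is an assumption, not the actual file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF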
	I1017 19:23:01.554236  114312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:23:01.554327  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:01.554362  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-322722 minikube.k8s.io/updated_at=2025_10_17T19_23_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=addons-322722 minikube.k8s.io/primary=true
	I1017 19:23:01.722696  114312 ops.go:34] apiserver oom_adj: -16
	I1017 19:23:01.722826  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:02.223964  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:02.723197  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:03.223896  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:03.723893  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:04.223238  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:04.723582  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:05.223520  114312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:23:05.302519  114312 kubeadm.go:1113] duration metric: took 3.748262622s to wait for elevateKubeSystemPrivileges
	I1017 19:23:05.302554  114312 kubeadm.go:402] duration metric: took 17.676272153s to StartCluster
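The 3.7s elevateKubeSystemPrivileges wait above is a plain poll: rerun `kubectl get sa default` roughly every 500ms until the default ServiceAccount exists, which signals the controller-manager is populating the cluster. The same wait as a shell sketch:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount appears
    done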
	I1017 19:23:05.302579  114312 settings.go:142] acquiring lock: {Name:mkb7b59ea598dca0a5adfe4320f5bbb3feb2252c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:05.302707  114312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 19:23:05.303087  114312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/kubeconfig: {Name:mk80b2133650ff16478c714743c00aa30ac700c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:05.303290  114312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:23:05.303297  114312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:23:05.303352  114312 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
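The toEnable map above is the per-profile addon state that the parallel setup below acts on; entries set to true (ingress, ingress-dns, registry, metrics-server, ...) are what get installed. From the host, the equivalent per-addon CLI calls are:

    # Toggle a single addon for this profile and inspect the result
    minikube -p addons-322722 addons enable ingress
    minikube -p addons-322722 addons disable volcano
    minikube -p addons-322722 addons list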
	I1017 19:23:05.303474  114312 addons.go:69] Setting yakd=true in profile "addons-322722"
	I1017 19:23:05.303489  114312 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-322722"
	I1017 19:23:05.303487  114312 addons.go:69] Setting default-storageclass=true in profile "addons-322722"
	I1017 19:23:05.303504  114312 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-322722"
	I1017 19:23:05.303509  114312 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-322722"
	I1017 19:23:05.303509  114312 addons.go:69] Setting storage-provisioner=true in profile "addons-322722"
	I1017 19:23:05.303527  114312 addons.go:69] Setting volumesnapshots=true in profile "addons-322722"
	I1017 19:23:05.303535  114312 addons.go:238] Setting addon storage-provisioner=true in "addons-322722"
	I1017 19:23:05.303534  114312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-322722"
	I1017 19:23:05.303538  114312 addons.go:238] Setting addon volumesnapshots=true in "addons-322722"
	I1017 19:23:05.303533  114312 config.go:182] Loaded profile config "addons-322722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:05.303558  114312 addons.go:69] Setting ingress=true in profile "addons-322722"
	I1017 19:23:05.303552  114312 addons.go:69] Setting registry=true in profile "addons-322722"
	I1017 19:23:05.303567  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.303567  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.303576  114312 addons.go:238] Setting addon registry=true in "addons-322722"
	I1017 19:23:05.303578  114312 addons.go:238] Setting addon ingress=true in "addons-322722"
	I1017 19:23:05.303546  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.303602  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.303604  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.303618  114312 addons.go:69] Setting cloud-spanner=true in profile "addons-322722"
	I1017 19:23:05.303642  114312 addons.go:238] Setting addon cloud-spanner=true in "addons-322722"
	I1017 19:23:05.303671  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.303870  114312 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-322722"
	I1017 19:23:05.303890  114312 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-322722"
	I1017 19:23:05.303914  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304047  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304047  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.303523  114312 addons.go:69] Setting volcano=true in profile "addons-322722"
	I1017 19:23:05.304067  114312 addons.go:69] Setting registry-creds=true in profile "addons-322722"
	I1017 19:23:05.304065  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304080  114312 addons.go:238] Setting addon registry-creds=true in "addons-322722"
	I1017 19:23:05.304081  114312 addons.go:69] Setting metrics-server=true in profile "addons-322722"
	I1017 19:23:05.304086  114312 addons.go:69] Setting ingress-dns=true in profile "addons-322722"
	I1017 19:23:05.304091  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304092  114312 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-322722"
	I1017 19:23:05.303517  114312 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-322722"
	I1017 19:23:05.304102  114312 addons.go:238] Setting addon ingress-dns=true in "addons-322722"
	I1017 19:23:05.303499  114312 addons.go:238] Setting addon yakd=true in "addons-322722"
	I1017 19:23:05.304108  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304120  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304125  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304130  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304132  114312 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-322722"
	I1017 19:23:05.304083  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304165  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304074  114312 addons.go:238] Setting addon volcano=true in "addons-322722"
	I1017 19:23:05.304054  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304207  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304075  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304096  114312 addons.go:238] Setting addon metrics-server=true in "addons-322722"
	I1017 19:23:05.303552  114312 addons.go:69] Setting gcp-auth=true in profile "addons-322722"
	I1017 19:23:05.304292  114312 mustload.go:65] Loading cluster: addons-322722
	I1017 19:23:05.304051  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304080  114312 addons.go:69] Setting inspektor-gadget=true in profile "addons-322722"
	I1017 19:23:05.304325  114312 addons.go:238] Setting addon inspektor-gadget=true in "addons-322722"
	I1017 19:23:05.304366  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304102  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304430  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304095  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304449  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304460  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304478  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304460  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304488  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304498  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304499  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304523  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304544  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304786  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304797  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.304786  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.304813  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304819  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.304997  114312 config.go:182] Loaded profile config "addons-322722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:05.304787  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.305178  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.305203  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.305332  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.305352  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.305370  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.305383  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.308363  114312 out.go:179] * Verifying Kubernetes components...
	I1017 19:23:05.309899  114312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:05.313800  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.313860  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.330257  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I1017 19:23:05.331073  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.331704  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.331727  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.332149  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.333316  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I1017 19:23:05.333588  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.333621  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.335355  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I1017 19:23:05.339739  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.341048  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.341121  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.341310  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.341945  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.342010  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.342025  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.342095  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33165
	I1017 19:23:05.342464  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.342571  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.342703  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.342738  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.343131  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.343181  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.343142  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.343240  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.343549  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.344026  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.344060  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.344709  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I1017 19:23:05.345588  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.346337  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.346354  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.350193  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.350928  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.350973  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.353800  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1017 19:23:05.356695  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I1017 19:23:05.356701  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I1017 19:23:05.357329  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.357803  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.357818  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.358301  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.358886  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.359059  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.359730  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.359812  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.359970  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45669
	I1017 19:23:05.360207  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.360826  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.360885  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.362106  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.363037  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.363061  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.364220  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.367006  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1017 19:23:05.367051  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.367069  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.367157  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.367195  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I1017 19:23:05.367392  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.368359  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.368423  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.369102  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.369146  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.369422  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I1017 19:23:05.369442  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.369628  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.369648  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.369825  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.371027  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.371052  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.371151  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.371164  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.371168  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.371888  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I1017 19:23:05.372135  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.372521  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.372986  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.373022  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.373259  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.374139  114312 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-322722"
	I1017 19:23:05.375021  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.374272  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.377249  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.374820  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.378239  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.378276  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.380995  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I1017 19:23:05.381261  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.381274  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.381681  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.381711  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.382896  114312 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 19:23:05.385102  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I1017 19:23:05.385179  114312 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:23:05.385195  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 19:23:05.385216  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.385630  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.386234  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.386251  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.386768  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.386942  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1017 19:23:05.387440  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.387755  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.388106  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.389969  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42553
	I1017 19:23:05.390640  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1017 19:23:05.395284  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.396326  114312 addons.go:238] Setting addon default-storageclass=true in "addons-322722"
	I1017 19:23:05.396866  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.397048  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I1017 19:23:05.396009  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.396072  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.396265  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I1017 19:23:05.396568  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.396711  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.398044  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.398006  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I1017 19:23:05.398330  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.398924  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.398970  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.399673  114312 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 19:23:05.401726  114312 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:23:05.401746  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 19:23:05.401769  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.401963  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I1017 19:23:05.401976  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.402126  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.402161  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.402177  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.402307  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.402313  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.402336  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.402318  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.402395  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.402405  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.402421  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.402522  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.402532  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.402571  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.402583  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.403053  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.403557  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.403960  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.404274  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.404286  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.404329  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.404355  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.404418  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.404436  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.405303  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.405314  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.405339  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.405386  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.405431  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.405552  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.405566  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.405619  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I1017 19:23:05.405703  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.405738  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.406667  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.406728  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.407723  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.407807  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.408222  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.408255  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.408540  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.408555  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.408722  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.408836  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.409305  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.409339  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.409839  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.409878  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.410612  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.410841  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.410962  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.411601  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.411828  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.411929  114312 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 19:23:05.412128  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.412405  114312 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 19:23:05.413160  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.413370  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.413519  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.413624  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.414267  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:05.414307  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.414351  114312 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 19:23:05.414377  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 19:23:05.414398  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.414560  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.414708  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.414750  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.414972  114312 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 19:23:05.415937  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.416741  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37031
	I1017 19:23:05.417261  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.418040  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I1017 19:23:05.418144  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.418166  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.418549  114312 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 19:23:05.418573  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 19:23:05.418593  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.419084  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.419335  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.419569  114312 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 19:23:05.420917  114312 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 19:23:05.420952  114312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 19:23:05.420980  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.421075  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.421093  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.421958  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.421974  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.424987  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.425016  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34121
	I1017 19:23:05.424991  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I1017 19:23:05.425543  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.425619  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.426241  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.426257  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.426686  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.427304  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.427344  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.428175  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.428316  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.428471  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.429041  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.429355  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.429598  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.429890  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.430049  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.430345  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.431630  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.431648  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.432033  114312 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:23:05.432105  114312 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 19:23:05.432168  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I1017 19:23:05.432361  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.433279  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.433424  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.433939  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.434038  114312 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:23:05.434129  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:23:05.435121  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.434075  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.434220  114312 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:23:05.435193  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 19:23:05.435209  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.435214  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.434637  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.435255  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.434798  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.435295  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.435311  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.436219  114312 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 19:23:05.436562  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.436647  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.436943  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.437230  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.437342  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:05.437399  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:05.437507  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.437665  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.438413  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
	I1017 19:23:05.438791  114312 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 19:23:05.438812  114312 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 19:23:05.438832  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.439125  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.439692  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.439715  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.440112  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.440307  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.441428  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.441875  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.442773  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I1017 19:23:05.443909  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.444735  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.445020  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.445124  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.444965  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.444991  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.445425  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.446240  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.446425  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.446788  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.446815  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.447412  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 19:23:05.447596  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.447622  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.447785  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.447976  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.448283  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.448367  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.448528  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.448600  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.448604  114312 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 19:23:05.448618  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.448805  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.449009  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.449206  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.449362  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.449775  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I1017 19:23:05.449799  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.450022  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.450341  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.450539  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.450768  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.450975  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.451052  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 19:23:05.451277  114312 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:23:05.451292  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 19:23:05.451311  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.452024  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.452241  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I1017 19:23:05.452626  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.452684  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.452944  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.453076  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.453486  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.453503  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.453556  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.453682  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.454451  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:05.454488  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:05.454526  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 19:23:05.454651  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.454882  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.455830  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:05.455909  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:05.456090  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:05.456170  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:05.456181  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:05.457664  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I1017 19:23:05.458068  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.458214  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 19:23:05.458448  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.458638  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.458662  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.458894  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.459121  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.459242  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.459372  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.459406  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.459400  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.459884  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.459898  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:05.459935  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:05.459944  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:05.460532  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	W1017 19:23:05.460588  114312 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 19:23:05.460812  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 19:23:05.461119  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.461501  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I1017 19:23:05.462210  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.462814  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 19:23:05.462816  114312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 19:23:05.463087  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.463110  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.463449  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.463494  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.463743  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.463961  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I1017 19:23:05.464662  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.465436  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.465466  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.465726  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.465867  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.465845  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 19:23:05.466532  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.466660  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 19:23:05.467524  114312 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 19:23:05.467555  114312 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 19:23:05.467984  114312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 19:23:05.468008  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.467526  114312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:23:05.468621  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34575
	I1017 19:23:05.469120  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:05.469246  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.469831  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:05.470071  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:05.470496  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:05.470513  114312 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 19:23:05.470787  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:05.471606  114312 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 19:23:05.471614  114312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:23:05.471616  114312 out.go:179]   - Using image docker.io/busybox:stable
	I1017 19:23:05.472108  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.472482  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 19:23:05.472502  114312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 19:23:05.472500  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.472524  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.472533  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.472769  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.473015  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.473050  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:05.473241  114312 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 19:23:05.473254  114312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 19:23:05.473268  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.473332  114312 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:23:05.473342  114312 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:23:05.473345  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 19:23:05.473350  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 19:23:05.473363  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.473363  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.473368  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.473384  114312 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:23:05.473395  114312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:23:05.473410  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:05.473518  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.480053  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.480469  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.480500  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.480626  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.480685  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.480726  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.480766  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.480994  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.481235  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.481267  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.481288  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.481560  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.481591  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.481564  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.481613  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.481615  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.481569  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:05.481636  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:05.481661  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.481692  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.481740  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.481845  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.481878  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.481878  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.481890  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.482023  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:05.482130  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.482141  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.482142  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.482255  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:05.482257  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.482279  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.482376  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:05.482445  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.482479  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:05.609324  114312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1017 19:23:05.617234  114312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40218->192.168.39.86:22: read: connection reset by peer
	I1017 19:23:05.617275  114312 retry.go:31] will retry after 129.966646ms: ssh: handshake failed: read tcp 192.168.39.1:40218->192.168.39.86:22: read: connection reset by peer
	W1017 19:23:05.658696  114312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40246->192.168.39.86:22: read: connection reset by peer
	I1017 19:23:05.658733  114312 retry.go:31] will retry after 130.773335ms: ssh: handshake failed: read tcp 192.168.39.1:40246->192.168.39.86:22: read: connection reset by peer
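
The two sshutil warnings above show minikube's transient-failure handling: a failed ssh handshake is logged at retry.go:31 and retried after a sub-second randomized delay. A minimal Go sketch of that pattern follows; the attempt count, delay range, and error text are illustrative assumptions, not minikube's actual retry.go.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryDial retries fn up to attempts times with a small randomized
    // backoff, mirroring the "will retry after 129.966646ms" lines above.
    func retryDial(attempts int, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Randomized delay so concurrent dialers do not retry in lockstep.
    		d := time.Duration(100+rand.Intn(100)) * time.Millisecond
    		fmt.Printf("dial failure (will retry): retrying after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryDial(3, func() error {
    		calls++
    		if calls < 2 {
    			return errors.New("ssh: handshake failed: connection reset by peer")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
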
	I1017 19:23:05.716507  114312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:23:06.082474  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:23:06.087041  114312 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 19:23:06.087070  114312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 19:23:06.090306  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 19:23:06.202387  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:23:06.203217  114312 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 19:23:06.203243  114312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 19:23:06.212217  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:23:06.223149  114312 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:06.223178  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 19:23:06.230307  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:23:06.247238  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:23:06.267059  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 19:23:06.267083  114312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 19:23:06.268671  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:23:06.340777  114312 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 19:23:06.340812  114312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 19:23:06.359791  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:23:06.373519  114312 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 19:23:06.373545  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 19:23:06.612164  114312 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 19:23:06.612196  114312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 19:23:06.683935  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:23:06.687152  114312 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:23:06.687172  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 19:23:06.744062  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:06.818914  114312 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 19:23:06.818950  114312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 19:23:06.915504  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 19:23:06.915537  114312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 19:23:07.005264  114312 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 19:23:07.005294  114312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 19:23:07.185399  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:23:07.322018  114312 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 19:23:07.322058  114312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 19:23:07.468868  114312 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 19:23:07.468900  114312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 19:23:07.755814  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 19:23:07.755841  114312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 19:23:07.770823  114312 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:23:07.770860  114312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 19:23:07.846629  114312 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:23:07.846652  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 19:23:08.017317  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 19:23:08.017342  114312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 19:23:08.191401  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:23:08.193894  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 19:23:08.193920  114312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 19:23:08.329201  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:23:08.352901  114312 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:23:08.352951  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 19:23:08.502329  114312 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 19:23:08.502361  114312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 19:23:08.784311  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:23:09.009436  114312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.400065082s)
	I1017 19:23:09.009477  114312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.292935876s)
	I1017 19:23:09.009485  114312 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
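
The pipeline that completed here (after 3.4s) rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway. Reconstructed from the two sed expressions in the command above — one inserts a log directive before the errors line, the other a hosts stanza before the forward plugin — the affected part of the ConfigMap ends up looking roughly like this (surrounding plugins elided):

        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
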
	I1017 19:23:09.009561  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.927053856s)
	I1017 19:23:09.009613  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:09.009629  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:09.010045  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:09.010080  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:09.010096  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:09.010111  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:09.010118  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:09.010358  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:09.010376  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:09.010383  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:09.010412  114312 node_ready.go:35] waiting up to 6m0s for node "addons-322722" to be "Ready" ...
	I1017 19:23:09.020607  114312 node_ready.go:49] node "addons-322722" is "Ready"
	I1017 19:23:09.020645  114312 node_ready.go:38] duration metric: took 10.189163ms for node "addons-322722" to be "Ready" ...
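
node_ready.go polls the node object until its Ready condition reports true; here it already was, hence the 10ms duration. A minimal client-go sketch of the underlying check, reusing the kubeconfig path and node name from this run — an illustration, not minikube's actual node_ready.go:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady fetches the node and reports whether its Ready
    // condition is currently True.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeIsReady(context.Background(), cs, "addons-322722")
    	fmt.Println(ready, err)
    }
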
	I1017 19:23:09.020663  114312 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:23:09.020725  114312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:23:09.140549  114312 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 19:23:09.140572  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 19:23:09.516003  114312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-322722" context rescaled to 1 replicas
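
The rescale of the coredns deployment to one replica logged by kapi.go can be expressed through the deployment's scale subresource. A sketch under the same client-go assumptions as the previous snippet; the package and function names are hypothetical:

    package kapi

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleDeployment sets the replica count via the scale subresource,
    // the same effect as the "rescaled to 1 replicas" line above.
    func scaleDeployment(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
    	return err
    }
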
	I1017 19:23:09.574755  114312 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 19:23:09.574781  114312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 19:23:09.732372  114312 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 19:23:09.732397  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 19:23:10.034661  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.944318208s)
	I1017 19:23:10.034708  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.832277845s)
	I1017 19:23:10.034745  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:10.034749  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.822503324s)
	I1017 19:23:10.034758  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:10.034784  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:10.034803  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:10.034754  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:10.034891  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:10.035219  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:10.035235  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:10.035244  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:10.035251  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:10.035258  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:10.035278  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:10.035288  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:10.035290  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:10.035350  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:10.035290  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:10.035905  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:10.035918  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:10.035926  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:10.036747  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:10.036747  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:10.036797  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:10.036798  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:10.036813  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:10.036818  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:10.036825  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:10.036927  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:10.036949  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:10.085469  114312 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 19:23:10.085492  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 19:23:10.498487  114312 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:23:10.498520  114312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 19:23:10.770931  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
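
Note how every manifest for one addon is passed to a single kubectl apply as a separate -f flag, so each addon installs in one invocation. A local-process sketch of that call shape (the runner package and helper name are hypothetical; the sudo form, binary path, and kubeconfig path are the ones in the log, and the real ssh_runner executes this over ssh rather than locally):

    package runner

    import "os/exec"

    // applyManifests mirrors the kubectl invocation above: one apply
    // carrying every manifest file as its own -f flag.
    func applyManifests(files ...string) ([]byte, error) {
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	return exec.Command("sudo", args...).CombinedOutput()
    }
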
	I1017 19:23:12.227396  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.997044553s)
	I1017 19:23:12.227480  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.227494  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.227511  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.980244447s)
	I1017 19:23:12.227555  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.227571  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.227841  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.227882  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.227890  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.227893  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.227902  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.227899  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:12.227907  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.227911  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.227919  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.227891  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:12.228152  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.228170  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.228172  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:12.228202  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.228209  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.554531  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.285814978s)
	I1017 19:23:12.554596  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.554610  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.555008  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.555029  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.555038  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.555047  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.555060  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:12.555281  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.555301  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.765720  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:12.765754  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:12.766106  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:12.766127  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:12.766141  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:12.878001  114312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 19:23:12.878041  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:12.881331  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:12.881832  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:12.881883  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:12.882132  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:12.882374  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:12.882568  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:12.882745  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:13.418687  114312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 19:23:13.609055  114312 addons.go:238] Setting addon gcp-auth=true in "addons-322722"
	I1017 19:23:13.609126  114312 host.go:66] Checking if "addons-322722" exists ...
	I1017 19:23:13.609596  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:13.609659  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:13.624380  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I1017 19:23:13.624926  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:13.625404  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:13.625428  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:13.625809  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:13.626488  114312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:23:13.626528  114312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:23:13.640212  114312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I1017 19:23:13.640699  114312 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:23:13.641263  114312 main.go:141] libmachine: Using API Version  1
	I1017 19:23:13.641290  114312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:23:13.641657  114312 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:23:13.641940  114312 main.go:141] libmachine: (addons-322722) Calling .GetState
	I1017 19:23:13.644059  114312 main.go:141] libmachine: (addons-322722) Calling .DriverName
	I1017 19:23:13.644400  114312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 19:23:13.644435  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHHostname
	I1017 19:23:13.648242  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:13.648817  114312 main.go:141] libmachine: (addons-322722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:c0:9a", ip: ""} in network mk-addons-322722: {Iface:virbr1 ExpiryTime:2025-10-17 20:22:36 +0000 UTC Type:0 Mac:52:54:00:20:c0:9a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-322722 Clientid:01:52:54:00:20:c0:9a}
	I1017 19:23:13.648873  114312 main.go:141] libmachine: (addons-322722) DBG | domain addons-322722 has defined IP address 192.168.39.86 and MAC address 52:54:00:20:c0:9a in network mk-addons-322722
	I1017 19:23:13.649087  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHPort
	I1017 19:23:13.649269  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHKeyPath
	I1017 19:23:13.649372  114312 main.go:141] libmachine: (addons-322722) Calling .GetSSHUsername
	I1017 19:23:13.649491  114312 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/addons-322722/id_rsa Username:docker}
	I1017 19:23:14.073009  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.713171859s)
	I1017 19:23:14.073045  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.389083203s)
	I1017 19:23:14.073068  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073080  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073068  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073154  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.329055944s)
	I1017 19:23:14.073181  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073185  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.887750383s)
	W1017 19:23:14.073194  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:14.073207  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073218  114312 retry.go:31] will retry after 373.223767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:14.073224  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073303  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.881871525s)
	I1017 19:23:14.073334  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073343  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073346  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.744113176s)
	I1017 19:23:14.073370  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073383  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073391  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.073404  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.073415  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073423  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073423  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.073486  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.073513  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.073519  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.073528  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073540  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073556  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.073569  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.073577  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073584  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073650  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.073685  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.073691  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.073697  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.073706  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.073800  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.073808  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.073818  114312 addons.go:479] Verifying addon ingress=true in "addons-322722"
	I1017 19:23:14.073954  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.074048  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.074052  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.074059  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.074067  114312 addons.go:479] Verifying addon registry=true in "addons-322722"
	I1017 19:23:14.074218  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.074272  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.076358  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.076373  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.076384  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.076394  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.076396  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.076402  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.076409  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.076414  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.076657  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.076658  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.076897  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.076917  114312 addons.go:479] Verifying addon metrics-server=true in "addons-322722"
	I1017 19:23:14.077371  114312 out.go:179] * Verifying registry addon...
	I1017 19:23:14.077444  114312 out.go:179] * Verifying ingress addon...
	I1017 19:23:14.078347  114312 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-322722 service yakd-dashboard -n yakd-dashboard
	
	I1017 19:23:14.080029  114312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 19:23:14.080049  114312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 19:23:14.134607  114312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:23:14.134629  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:14.134805  114312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:23:14.134832  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:14.170998  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:14.171033  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:14.171355  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:14.171379  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:14.171387  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:14.446888  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:14.701553  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:14.703601  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:14.869105  114312 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.848353949s)
	I1017 19:23:14.869153  114312 api_server.go:72] duration metric: took 9.565825106s to wait for apiserver process to appear ...
	I1017 19:23:14.869163  114312 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:23:14.869188  114312 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I1017 19:23:14.869105  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.084701989s)
	W1017 19:23:14.869278  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:23:14.869314  114312 retry.go:31] will retry after 277.065707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:23:14.879210  114312 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I1017 19:23:14.881121  114312 api_server.go:141] control plane version: v1.34.1
	I1017 19:23:14.881152  114312 api_server.go:131] duration metric: took 11.98194ms to wait for apiserver health ...
	I1017 19:23:14.881163  114312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:23:14.900470  114312 system_pods.go:59] 16 kube-system pods found
	I1017 19:23:14.900509  114312 system_pods.go:61] "amd-gpu-device-plugin-r9jff" [cc7e6d83-4610-419a-8aa1-a9330ed8d26e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:23:14.900517  114312 system_pods.go:61] "coredns-66bc5c9577-rlkxn" [aeaebc47-3b80-4d07-9123-3b4ba902baf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:23:14.900527  114312 system_pods.go:61] "coredns-66bc5c9577-wpqvv" [0931beb6-db13-496a-b1b5-5332521ec41e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:23:14.900531  114312 system_pods.go:61] "etcd-addons-322722" [b76da312-5707-40e3-b9a9-87f636777ef1] Running
	I1017 19:23:14.900535  114312 system_pods.go:61] "kube-apiserver-addons-322722" [379023e2-caaf-4b97-aab5-2dd48c4212dc] Running
	I1017 19:23:14.900538  114312 system_pods.go:61] "kube-controller-manager-addons-322722" [1e65a426-8797-46bf-ac5e-25e87907b86f] Running
	I1017 19:23:14.900543  114312 system_pods.go:61] "kube-ingress-dns-minikube" [917eef7b-28f5-4f7a-b3ec-90065894e800] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:23:14.900546  114312 system_pods.go:61] "kube-proxy-shv79" [8e44318e-d78f-486d-a353-0a93475a7e24] Running
	I1017 19:23:14.900549  114312 system_pods.go:61] "kube-scheduler-addons-322722" [738b940a-cd54-4011-837a-cc9731f78ba0] Running
	I1017 19:23:14.900553  114312 system_pods.go:61] "metrics-server-85b7d694d7-6xm4s" [b19952ba-7948-4378-80c6-cfeb5ca18fd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:23:14.900558  114312 system_pods.go:61] "nvidia-device-plugin-daemonset-5b7p4" [c3831f56-865d-4bf2-bc81-2e3f4aeab7c2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:23:14.900566  114312 system_pods.go:61] "registry-6b586f9694-n24pg" [98612915-1ff9-4ccb-a7d3-b957aed88735] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:23:14.900570  114312 system_pods.go:61] "registry-creds-764b6fb674-72hrk" [08e5f446-7d99-4770-b20e-8e06a33ab07e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:23:14.900575  114312 system_pods.go:61] "registry-proxy-7ntwn" [4004872e-0247-4c72-a17e-5ffef1c90027] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:23:14.900579  114312 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lqq8r" [3a7f01c8-1260-427f-a348-35b44f022481] Pending
	I1017 19:23:14.900584  114312 system_pods.go:61] "storage-provisioner" [55359186-9bf4-4138-ad08-0a0f3d2686b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:23:14.900590  114312 system_pods.go:74] duration metric: took 19.421756ms to wait for pod list to return data ...
	I1017 19:23:14.900600  114312 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:23:14.951514  114312 default_sa.go:45] found service account: "default"
	I1017 19:23:14.951559  114312 default_sa.go:55] duration metric: took 50.94987ms for default service account to be created ...
	I1017 19:23:14.951576  114312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:23:14.998526  114312 system_pods.go:86] 17 kube-system pods found
	I1017 19:23:14.998564  114312 system_pods.go:89] "amd-gpu-device-plugin-r9jff" [cc7e6d83-4610-419a-8aa1-a9330ed8d26e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1017 19:23:14.998572  114312 system_pods.go:89] "coredns-66bc5c9577-rlkxn" [aeaebc47-3b80-4d07-9123-3b4ba902baf8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:23:14.998580  114312 system_pods.go:89] "coredns-66bc5c9577-wpqvv" [0931beb6-db13-496a-b1b5-5332521ec41e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:23:14.998584  114312 system_pods.go:89] "etcd-addons-322722" [b76da312-5707-40e3-b9a9-87f636777ef1] Running
	I1017 19:23:14.998588  114312 system_pods.go:89] "kube-apiserver-addons-322722" [379023e2-caaf-4b97-aab5-2dd48c4212dc] Running
	I1017 19:23:14.998591  114312 system_pods.go:89] "kube-controller-manager-addons-322722" [1e65a426-8797-46bf-ac5e-25e87907b86f] Running
	I1017 19:23:14.998596  114312 system_pods.go:89] "kube-ingress-dns-minikube" [917eef7b-28f5-4f7a-b3ec-90065894e800] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:23:14.998600  114312 system_pods.go:89] "kube-proxy-shv79" [8e44318e-d78f-486d-a353-0a93475a7e24] Running
	I1017 19:23:14.998605  114312 system_pods.go:89] "kube-scheduler-addons-322722" [738b940a-cd54-4011-837a-cc9731f78ba0] Running
	I1017 19:23:14.998610  114312 system_pods.go:89] "metrics-server-85b7d694d7-6xm4s" [b19952ba-7948-4378-80c6-cfeb5ca18fd6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:23:14.998615  114312 system_pods.go:89] "nvidia-device-plugin-daemonset-5b7p4" [c3831f56-865d-4bf2-bc81-2e3f4aeab7c2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:23:14.998625  114312 system_pods.go:89] "registry-6b586f9694-n24pg" [98612915-1ff9-4ccb-a7d3-b957aed88735] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:23:14.998630  114312 system_pods.go:89] "registry-creds-764b6fb674-72hrk" [08e5f446-7d99-4770-b20e-8e06a33ab07e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:23:14.998638  114312 system_pods.go:89] "registry-proxy-7ntwn" [4004872e-0247-4c72-a17e-5ffef1c90027] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:23:14.998648  114312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lqq8r" [3a7f01c8-1260-427f-a348-35b44f022481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:23:14.998656  114312 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m4bs8" [af55b4cd-965f-4785-96cc-b7c41557fd16] Pending
	I1017 19:23:14.998664  114312 system_pods.go:89] "storage-provisioner" [55359186-9bf4-4138-ad08-0a0f3d2686b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:23:14.998675  114312 system_pods.go:126] duration metric: took 47.091483ms to wait for k8s-apps to be running ...
	I1017 19:23:14.998683  114312 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:23:14.998729  114312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:23:15.094673  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:15.094934  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:15.147442  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:23:15.586526  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:15.588158  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:16.140315  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:16.140330  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:16.161193  114312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.516754699s)
	I1017 19:23:16.162446  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.391448542s)
	I1017 19:23:16.162503  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:16.162530  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:16.162938  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:16.162960  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:16.162976  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:16.162970  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:16.162984  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:16.162999  114312 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:23:16.163247  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:16.163294  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:16.163303  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:16.163319  114312 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-322722"
	I1017 19:23:16.164843  114312 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 19:23:16.166374  114312 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 19:23:16.167004  114312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 19:23:16.167474  114312 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 19:23:16.167495  114312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 19:23:16.196404  114312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:23:16.196433  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:16.302769  114312 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 19:23:16.302798  114312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 19:23:16.480510  114312 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:23:16.480542  114312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 19:23:16.606809  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:16.610358  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:16.651189  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:23:16.689678  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:17.091676  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:17.093665  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:17.174647  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:17.461733  114312 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.46297632s)
	I1017 19:23:17.461753  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.014814849s)
	I1017 19:23:17.461773  114312 system_svc.go:56] duration metric: took 2.463086119s WaitForService to wait for kubelet
	W1017 19:23:17.461797  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:17.461782  114312 kubeadm.go:586] duration metric: took 12.158456475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:23:17.461814  114312 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:23:17.461825  114312 retry.go:31] will retry after 218.923476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:17.469290  114312 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1017 19:23:17.469328  114312 node_conditions.go:123] node cpu capacity is 2
	I1017 19:23:17.469343  114312 node_conditions.go:105] duration metric: took 7.521371ms to run NodePressure ...
	I1017 19:23:17.469359  114312 start.go:241] waiting for startup goroutines ...
	I1017 19:23:17.589015  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:17.589060  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:17.681566  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:17.688236  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:17.985530  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.838019247s)
	I1017 19:23:17.985612  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:17.985687  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:17.986025  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:17.986044  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:17.986053  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	I1017 19:23:17.986066  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:17.986130  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:17.986369  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:17.986383  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:18.085046  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:18.087890  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:18.185577  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:18.513789  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.86254525s)
	I1017 19:23:18.513879  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:18.513898  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:18.514234  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:18.514252  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:18.514260  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:23:18.514268  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:23:18.514489  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:23:18.514513  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:23:18.515944  114312 addons.go:479] Verifying addon gcp-auth=true in "addons-322722"
	I1017 19:23:18.518511  114312 out.go:179] * Verifying gcp-auth addon...
	I1017 19:23:18.520505  114312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 19:23:18.546527  114312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 19:23:18.546551  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:18.630199  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:18.630257  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:18.674637  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:19.025054  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:19.087559  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:19.089083  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:19.187254  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:19.528101  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:19.586464  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:19.587677  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:19.675664  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:19.774771  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.09312854s)
	W1017 19:23:19.774826  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:19.774878  114312 retry.go:31] will retry after 328.597812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:20.026798  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:20.087489  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:20.088761  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:20.103800  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:20.173548  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:20.525494  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:20.586965  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:20.587240  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:20.671264  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:23:20.896602  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:20.896654  114312 retry.go:31] will retry after 804.431683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:21.027238  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:21.087225  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:21.087361  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:21.173450  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:21.528645  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:21.585682  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:21.591062  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:21.672921  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:21.701964  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:22.027045  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:22.088152  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:22.088326  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:22.171835  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:22.527771  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:22.585707  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:22.587744  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:22.674844  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:22.898771  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.19677156s)
	W1017 19:23:22.898831  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:22.898862  114312 retry.go:31] will retry after 1.275754844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:23.025845  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:23.088528  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:23.088786  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:23.172926  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:23.527006  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:23.588898  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:23.589399  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:23.672616  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:24.023734  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:24.084896  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:24.086358  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:24.174627  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:24.174748  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:24.526244  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:24.584695  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:24.585147  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:24.672665  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:25.026524  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:25.087991  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:25.089341  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:25.171701  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:25.496929  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.322147262s)
	W1017 19:23:25.496981  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:25.497002  114312 retry.go:31] will retry after 2.69854038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
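
The stderr above is kubectl's client-side validator, which rejects any YAML document that does not declare both `apiVersion` and `kind`. Since every resource in the companion file applies cleanly in the same run (each stdout line reports "unchanged" or "configured"), the complaint points at the rendered /etc/kubernetes/addons/ig-crd.yaml itself rather than at the API server. A minimal sketch of the failure mode, assuming only a host with kubectl on PATH; the heredoc is an illustrative stand-in, not the addon's actual manifest:

	# Sketch: a document with metadata but no type header fails client-side
	# validation the same way ig-crd.yaml does in the log above.
	cat <<'EOF' | kubectl apply --dry-run=client -f -
	metadata:
	  name: example
	EOF
	# kubectl typically reports: error validating data:
	# [apiVersion not set, kind not set]; every document in a manifest
	# must open with both fields before any other validation runs.
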
	I1017 19:23:25.529693  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:25.587194  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:25.587380  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:25.671570  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:26.025543  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:26.088897  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:26.089516  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:26.171906  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:26.524587  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:26.585681  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:26.590979  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:26.671734  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:27.025398  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:27.261299  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:27.266873  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:27.266935  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:27.577951  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:27.594216  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:27.595257  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:27.673082  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:28.024537  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:28.083789  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:28.088322  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:28.173350  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:28.196532  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:28.526780  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:28.585629  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:28.585920  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:28.968975  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:29.025908  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:29.087050  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:29.087095  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:29.173314  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:29.382670  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.186084942s)
	W1017 19:23:29.382715  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:29.382747  114312 retry.go:31] will retry after 3.448582305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:29.524244  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:29.588425  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:29.588500  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:29.674100  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:30.106893  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:30.107099  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:30.107312  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:30.172516  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:30.524459  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:30.586555  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:30.589515  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:30.673564  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:31.024253  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:31.086750  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:31.087347  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:31.170526  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:31.524777  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:31.585352  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:31.587244  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:32.165690  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:32.169446  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:32.267305  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:32.267372  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:32.269063  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:32.530329  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:32.591805  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:32.597682  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:32.671346  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:32.832491  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:33.024614  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:33.087117  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:33.090545  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:33.173245  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:33.525439  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:33.586230  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:33.593284  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:33.677771  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:23:33.752189  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:33.752233  114312 retry.go:31] will retry after 2.651091558s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:34.025875  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:34.087621  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:34.089037  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:34.173035  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:34.526634  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:34.585775  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:34.586584  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:34.672931  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:35.026574  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:35.084323  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:35.085459  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:35.171281  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:35.524875  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:35.584370  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:35.584716  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:35.671224  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:36.025631  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:36.083946  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:36.085093  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:36.170674  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:36.403928  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:36.526488  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:36.586788  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:36.586818  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:36.671407  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:37.025122  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:37.087578  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:37.090926  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:37.174297  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:23:37.328905  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:37.328938  114312 retry.go:31] will retry after 5.193345542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:37.524798  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:37.590790  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:37.593421  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:37.675823  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:38.024421  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:38.085307  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:38.086732  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:38.172424  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:38.529230  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:38.584751  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:38.585449  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:38.672175  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:39.025837  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:39.088067  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:39.089903  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:39.174703  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:39.525624  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:39.587180  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:39.587663  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:39.887006  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:40.025121  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:40.084473  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:40.084808  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:40.172390  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:40.526403  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:40.585681  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:40.587052  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:40.670749  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:41.025159  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:41.086061  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:41.086177  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:41.171998  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:41.524910  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:41.584244  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:41.584713  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:41.671348  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:42.028232  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:42.129177  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:42.129621  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:42.171504  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:42.523018  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:42.524719  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:42.626409  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:42.626725  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:42.670784  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:43.023503  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:43.087642  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:43.087664  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:43.171904  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:23:43.236382  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:43.236416  114312 retry.go:31] will retry after 12.315576473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
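
Each retry fails identically while `daemonset.apps/gadget configured` keeps succeeding, so the CRD file, not the cluster, is the constant in the failure; a plausible next step is to inspect what was actually written to the node. A diagnostic sketch, assuming a single running profile reachable through the minikube CLI (nothing below is taken from the test harness itself):

	# Sketch: check the rendered manifest on the node. A well-formed manifest
	# opens with apiVersion:/kind:, and a document lacking them reproduces
	# the "apiVersion not set, kind not set" error on every apply.
	minikube ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# Quick size sanity check on the rendered file; a truncated render could
	# explain why only this file fails while ig-deployment.yaml applies cleanly.
	minikube ssh -- sudo wc -c /etc/kubernetes/addons/ig-crd.yaml
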
	I1017 19:23:43.525731  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:43.585970  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:43.587740  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:43.671205  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:44.024762  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:44.084014  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:44.090403  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:44.170865  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:44.524932  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:44.585493  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:44.586836  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:44.674809  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:45.087504  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:45.087684  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:45.091263  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:45.172677  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:45.524509  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:45.585365  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:45.588219  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:45.670373  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:46.025738  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:46.088715  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:46.088729  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:46.172770  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:46.540429  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:46.590058  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:46.590139  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:46.673263  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:47.026526  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:47.086902  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:47.088124  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:47.174046  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:47.528244  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:47.584987  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:47.585203  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:47.670733  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:48.024029  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:48.084872  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:48.085171  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:48.171420  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:48.525914  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:48.586813  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:48.586897  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:48.673982  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:49.028428  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:49.087476  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:49.088278  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:49.172141  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:49.526460  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:49.586327  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:49.586462  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:49.676403  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:50.024964  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:50.086749  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:50.087117  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:50.172261  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:50.525113  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:50.593656  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:50.594835  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:50.672090  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:51.025128  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:51.084911  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:51.085088  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:51.171200  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:51.524712  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:51.585696  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:51.586482  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:51.671715  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:52.024900  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:52.086668  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:52.087021  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:52.170757  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:52.524712  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:52.584034  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:52.584596  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:52.671662  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:53.026081  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:53.085634  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:53.087410  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:53.172785  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:53.528436  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:53.589719  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:53.589757  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:53.673145  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:54.024688  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:54.083958  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:54.084306  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:54.173493  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:54.525054  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:54.584464  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:54.584844  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:54.672146  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:55.026351  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:55.083979  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:55.085434  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:55.174926  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:55.539472  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:55.552624  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:23:55.586118  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:55.589472  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:55.674102  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:56.024945  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:56.084702  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:56.085800  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:56.174512  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:23:56.515895  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:56.515940  114312 retry.go:31] will retry after 13.835588019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:23:56.526159  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:56.626593  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:56.626682  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:56.726868  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:57.024558  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:57.084551  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:57.085533  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:57.172413  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:57.524395  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:57.589674  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:57.594549  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:57.671387  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:58.029196  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:58.085982  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:58.088931  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:58.179315  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:58.533322  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:58.592082  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:58.592087  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:58.673147  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:59.028805  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:59.091656  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:59.094953  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:59.174630  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:23:59.537208  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:23:59.596128  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:23:59.599425  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:23:59.684415  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:00.024405  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:00.089780  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:00.090515  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:00.173017  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:00.528186  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:00.591923  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:00.591986  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:00.670395  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:01.026036  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:01.087134  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:01.087373  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:01.171330  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:01.523921  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:01.585541  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:01.585678  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:01.671518  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:02.025191  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:02.091223  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:02.091950  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:02.189357  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:02.524245  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:02.585105  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:02.585212  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:02.671314  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:03.026218  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:03.083146  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:03.084417  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:03.171844  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:03.524558  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:03.583838  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:03.584866  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:03.671479  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:04.025102  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:04.084099  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:04.085642  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:04.171338  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:04.525435  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:04.589242  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:04.590203  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:04.671439  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:05.025604  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:05.086585  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:05.089319  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:05.172590  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:05.529363  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:05.589691  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:05.590932  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:05.672259  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:06.025230  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:06.085256  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:06.086166  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:24:06.170911  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:06.524466  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:06.585465  114312 kapi.go:107] duration metric: took 52.505428063s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 19:24:06.585664  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:06.671772  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:07.024887  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:07.084312  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:07.170923  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:07.524825  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:07.585069  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:07.671677  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:08.024787  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:08.084429  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:08.171726  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:08.525033  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:08.584123  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:08.672198  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:09.026487  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:09.088247  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:09.172247  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:09.526370  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:09.587332  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:09.673093  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:10.027484  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:10.087369  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:10.171200  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:10.352091  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:24:10.524145  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:10.588281  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:10.671912  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:11.024100  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:11.380833  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:11.381180  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:11.952841  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:11.954945  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:11.955015  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:12.026283  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:12.086765  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:12.129816  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.777678644s)
	W1017 19:24:12.129903  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:24:12.129934  114312 retry.go:31] will retry after 15.51616606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
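The validation failure above means kubectl refused /etc/kubernetes/addons/ig-crd.yaml because the file lacks the top-level apiVersion and kind fields that every Kubernetes manifest must declare; the stdout shows the other gadget resources in the batch applied cleanly, so only the CRD file is malformed. For orientation, a well-formed CRD manifest opens with a header like the hypothetical sketch below (the group and name are illustrative placeholders, not the contents of the actual addon file):

	apiVersion: apiextensions.k8s.io/v1   # required top-level field, missing from ig-crd.yaml
	kind: CustomResourceDefinition        # required top-level field, missing from ig-crd.yaml
	metadata:
	  name: traces.gadget.kinvolk.io      # placeholder name, for illustration only

The retry below re-runs the same apply against the same file, so it fails identically until the file itself is regenerated.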
	I1017 19:24:12.173067  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:12.524661  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:12.588171  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:12.671723  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:13.028329  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:13.084204  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:13.172387  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:13.526292  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:13.627411  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:13.670621  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:14.024817  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:14.088341  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:14.175864  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:14.526882  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:14.586557  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:14.673318  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:15.025619  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:15.125252  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:15.227157  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:15.525439  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:15.583973  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:15.672007  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:16.025823  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:16.084376  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:16.171530  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:16.524955  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:16.585128  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:16.671416  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:17.024907  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:17.086184  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:17.175883  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:17.525452  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:17.584461  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:17.671038  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:18.025151  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:18.084550  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:18.172575  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:18.528967  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:18.586348  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:18.671786  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:19.028378  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:19.085900  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:19.173995  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:19.557191  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:19.585777  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:19.685278  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:20.030924  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:20.086789  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:20.174772  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:20.524105  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:20.584401  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:20.670876  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:21.230494  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:21.230961  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:21.231060  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:21.525411  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:21.625894  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:21.671049  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:22.025986  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:22.084440  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:22.174313  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:22.526189  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:22.584141  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:22.671081  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:23.029096  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:23.132570  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:23.372229  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:23.529268  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:23.631522  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:23.676454  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:24.026186  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:24.083427  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:24.171996  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:24.525158  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:24.588633  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:24.674903  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:25.024076  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:25.084443  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:25.175774  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:25.527132  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:25.584289  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:25.673301  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:26.023269  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:26.083953  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:26.172093  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:26.532637  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:26.584378  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:26.671585  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:27.138141  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:27.139341  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:27.172131  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:27.527575  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:27.588942  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:27.647018  114312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:24:27.672707  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:28.025703  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:28.085801  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:28.171669  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:28.689728  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:28.689782  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:28.690624  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:28.919575  114312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.272516237s)
	W1017 19:24:28.919659  114312 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:24:28.919728  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:24:28.919744  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:24:28.920061  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:24:28.920081  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:24:28.920091  114312 main.go:141] libmachine: Making call to close driver server
	I1017 19:24:28.920099  114312 main.go:141] libmachine: (addons-322722) Calling .Close
	I1017 19:24:28.920365  114312 main.go:141] libmachine: Successfully made call to close driver server
	I1017 19:24:28.920386  114312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 19:24:28.920365  114312 main.go:141] libmachine: (addons-322722) DBG | Closing plugin on server side
	W1017 19:24:28.920486  114312 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1017 19:24:29.026561  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:29.085635  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:29.171531  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:29.525026  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:29.585378  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:29.671723  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:30.120952  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:30.127440  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:30.223618  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:30.524236  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:30.590293  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:30.673424  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:31.026130  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:31.085878  114312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:24:31.171977  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:31.525825  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:31.584972  114312 kapi.go:107] duration metric: took 1m17.504915915s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 19:24:31.674554  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:32.027269  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:32.174946  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:32.525473  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:32.672384  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:33.026540  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:33.172437  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:33.525520  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:33.675697  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:34.033216  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:34.173358  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:34.524502  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:34.675237  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:35.025025  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:35.174756  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:35.526298  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:35.671335  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:36.024940  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:36.170784  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:36.525735  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:36.675806  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:37.025599  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:37.171606  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:24:37.525669  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:37.672213  114312 kapi.go:107] duration metric: took 1m21.505205433s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 19:24:38.025244  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:38.525677  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:39.024654  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:39.527712  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:40.024294  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:40.524513  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:41.025443  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:41.525517  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:42.024617  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:42.524566  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:43.024697  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:43.581621  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:44.024920  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:44.523728  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:45.025220  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:45.526438  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:46.024837  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:46.524530  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:47.024882  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:47.524900  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:48.025001  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:48.525251  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:49.025198  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:49.524153  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:50.024298  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:50.525422  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:51.024810  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:51.524937  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:52.025392  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:52.524150  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:53.023718  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:53.524052  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:54.028266  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:54.524525  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:55.024334  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:55.526245  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:56.024468  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:56.524482  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:57.024887  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:57.524552  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:58.028310  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:58.525386  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:59.024199  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:24:59.524046  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:00.024936  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:00.525952  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:01.025207  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:01.525107  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:02.028302  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:02.524460  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:03.024307  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:03.524591  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:04.025030  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:04.524976  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:05.023971  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:05.525664  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:06.024907  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:06.524296  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:07.024413  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:07.524298  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:08.024721  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:08.524518  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:09.025321  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:09.525699  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:10.025045  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:10.525693  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:11.024683  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:11.524134  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:12.024047  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:12.525652  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:13.024769  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:13.525285  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:14.024963  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:14.523896  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:15.025082  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:15.527940  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:16.025555  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:16.524136  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:17.025220  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:17.524758  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:18.025370  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:18.524699  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:19.025191  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:19.524214  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:20.025430  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:20.525464  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:21.024923  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:21.524251  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:22.029251  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:22.525071  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:23.025833  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:23.525171  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:24.024668  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:24.524647  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:25.025460  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:25.525508  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:26.025463  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:26.523666  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:27.025107  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:27.524076  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:28.024889  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:28.525308  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:29.024509  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:29.524401  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:30.024785  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:30.525444  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:31.025527  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:31.524107  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:32.024481  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:32.525038  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:33.023861  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:33.524488  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:34.026908  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:34.525838  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:35.026318  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:35.527330  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:36.025250  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:36.529552  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:37.029050  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:37.527047  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:38.023754  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:38.528097  114312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:25:39.030781  114312 kapi.go:107] duration metric: took 2m20.510271914s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 19:25:39.032279  114312 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-322722 cluster.
	I1017 19:25:39.033737  114312 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 19:25:39.035099  114312 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1017 19:25:39.036705  114312 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, registry-creds, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1017 19:25:39.038214  114312 addons.go:514] duration metric: took 2m33.734858692s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner registry-creds nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
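The gcp-auth notes above describe the opt-out mechanism: the addon mounts credentials into every newly created pod unless the pod carries a label with the gcp-auth-skip-secret key. A minimal sketch of an opted-out pod spec, with a placeholder name and image not taken from this run:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # placeholder name, illustration only
	  labels:
	    gcp-auth-skip-secret: "true"  # per the message above, the key is what matters
	spec:
	  containers:
	  - name: app
	    image: nginx                  # placeholder image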
	I1017 19:25:39.038279  114312 start.go:246] waiting for cluster config update ...
	I1017 19:25:39.038335  114312 start.go:255] writing updated cluster config ...
	I1017 19:25:39.038652  114312 ssh_runner.go:195] Run: rm -f paused
	I1017 19:25:39.046649  114312 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:25:39.050459  114312 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wpqvv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.056764  114312 pod_ready.go:94] pod "coredns-66bc5c9577-wpqvv" is "Ready"
	I1017 19:25:39.056788  114312 pod_ready.go:86] duration metric: took 6.298612ms for pod "coredns-66bc5c9577-wpqvv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.059301  114312 pod_ready.go:83] waiting for pod "etcd-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.064396  114312 pod_ready.go:94] pod "etcd-addons-322722" is "Ready"
	I1017 19:25:39.064416  114312 pod_ready.go:86] duration metric: took 5.089145ms for pod "etcd-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.067226  114312 pod_ready.go:83] waiting for pod "kube-apiserver-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.072609  114312 pod_ready.go:94] pod "kube-apiserver-addons-322722" is "Ready"
	I1017 19:25:39.072639  114312 pod_ready.go:86] duration metric: took 5.390671ms for pod "kube-apiserver-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.074766  114312 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.451142  114312 pod_ready.go:94] pod "kube-controller-manager-addons-322722" is "Ready"
	I1017 19:25:39.451183  114312 pod_ready.go:86] duration metric: took 376.39157ms for pod "kube-controller-manager-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:39.652277  114312 pod_ready.go:83] waiting for pod "kube-proxy-shv79" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:40.052203  114312 pod_ready.go:94] pod "kube-proxy-shv79" is "Ready"
	I1017 19:25:40.052231  114312 pod_ready.go:86] duration metric: took 399.929975ms for pod "kube-proxy-shv79" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:40.252153  114312 pod_ready.go:83] waiting for pod "kube-scheduler-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:40.651142  114312 pod_ready.go:94] pod "kube-scheduler-addons-322722" is "Ready"
	I1017 19:25:40.651170  114312 pod_ready.go:86] duration metric: took 398.993192ms for pod "kube-scheduler-addons-322722" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:25:40.651181  114312 pod_ready.go:40] duration metric: took 1.604485818s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:25:40.695144  114312 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1017 19:25:40.696838  114312 out.go:179] * Done! kubectl is now configured to use "addons-322722" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.335150734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760729329335120159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddffeb53-99fd-48f8-9c6f-fc2754a9a07a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.335884352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a883b39f-f974-4420-a8ea-81a44d78549d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.335937883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a883b39f-f974-4420-a8ea-81a44d78549d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.336264189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995362c034f5bd87f373bd8595bc6fc99dd236a4caf4791b55586d0466a8c300,PodSandboxId:eff911552043969d1eb67bf797ba84364a448b205ac1d469e7d12dbbb2141190,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760729186821541623,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9731b390-9ba2-425a-887f-65322578dfef,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027d4de331f3750037a7465c7d83aaf9a2216a0997b6560dc2643ec0b80ab4,PodSandboxId:caf401d5a728ae35ecdff8e06204740ee42b9b93a5b7f1ce32352eaa0acb076e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760729145147329523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dbcabfe6-793a-4be4-85b5-9d1a4812477c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ddf01b0f7484943e3ffebce8e4e64a1109ab96c6dcad5b77b4fd15a14bd3ea1,PodSandboxId:b4deb8ca1500bf063f195cb3ccf9926f98e8a0077003d8c9a43bd64ee93ea7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760729070334196337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-jxfph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0a1c39f2-fd97-4f69-82f3-0210bfbb2d13,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9e95028cf73d7293b3dc51695c532c94b4a0bffcfd8cdc27e6b458c6201b529,PodSandboxId:14d88c757e67e5b7d20f388e3203201b89754a6139a3a4afb833ed4875829eef,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729057860778239,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lw94z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd6a75f7-cd1f-4133-a5ee-69522ba6981a,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f323f8756327bc9ef1382e9a50d1244136804fab8b58affddeee955f8ec496,PodSandboxId:42015c6e60fb985bbb55e0b7655c012188f9004ffa5bcaadeb113009e649da2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729053420454582,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-28kvz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55ed35ec-d03c-4165-998b-d6348f545d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd84d572e738d424e8a81000226c8319df879eba060a34e38253aafd0ed14f3,PodSandboxId:e750b38270f77a2187f1ecc5219263cf2ac57f3bab0d885fc3b59310cfbed695,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760729042061708242,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gxw4w,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb45b0c7-be11-4878-bd15-b67e53ad4770,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1294b144c753d357e88caf85e50bdec3c918bd7125719e49822fdef7169e01,PodSandboxId:8175cd16511ba154e81dc608ec54847b9dc51c54485783ae07f3698ac36150a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760729030719091607,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 917eef7b-28f5-4f7a-b3ec-90065894e800,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9162e73ec2ae84ecaf1ba54d655bf7583934d31b2eb25449eb9d3984f723171e,PodSandboxId:c040c4a7f4cf7a191497fd028305868331a44a2b18173bb
732cd3b91474c00fd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760728997134201567,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-r9jff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7e6d83-4610-419a-8aa1-a9330ed8d26e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4939150cb20db2911faeb634ad75560402c504421cc7b0546a324eb3468519,PodSandboxId:2f0f91a
36d1622326fe78b0e26ba270958d698ff8493573c3298492aa67084cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760728996422312351,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55359186-9bf4-4138-ad08-0a0f3d2686b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb7e68e07e3ac9745704841470c1e2f63619196e7c89681b8469e4dc90d03a7,PodSandboxId:72d93c2167a95bc09e2
17d7f06331540be5b99e80aba8f0a33a6b7624bf27b36,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760728987528751507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpqvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0931beb6-db13-496a-b1b5-5332521ec41e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23dfbfcb1861333c02a4df8399a281e5373e9e08cd281ae79a7298d55a4ad0e9,PodSandboxId:761bea054314e59b63653ecc4b394a808eebf8ec38e02cb636520e2b40fbf1ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760728986835296636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shv79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e44318e-d78f-486d-a353-0a93475a7e24,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5174a921cc0e224f41eccf93177c0e1b128a989a02694514c5947131522dd5,PodSandboxId:9cbc9dcd2ba494216c5f4bc1c4701be97669561394bcf7c88f94564fb8430cb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760728974853149975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a50361a580b064fcc3c8b39a29bf68,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855441592d227af5caf46a3da7cad48bfcf559d0fbd870e55efbc8d46b65d2b5,PodSandboxId:2b7f0348b4ea0a984ade9617a1fb5255b54f59733ca2c246ce5a6d8f4fd4c533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760728974839398865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7227217edc4a0ef672ffcd64911706aa,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59edec2aae40e770671276d501e4f9f8da8d8e171bdf306d44167cd656bc723d,PodSandboxId:7d31d357b80c06c81968d6a9584cf9517e72a0ce76af51808fd677e0d43215a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760728974845380051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-3
22722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab9751e6e6de237aec6ecb4f862eb778,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1580db0b1cfb1233d77efe9ef7d822429cefca9b5cd4922ccc4b5b7ee013180,PodSandboxId:098a1f74122bc0dcce32c94f5342798c1b2bc2b90152d3328301376a268df983,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760728974864348590,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb16faed1188f7af949f558b90f162d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a883b39f-f974-4420-a8ea-81a44d78549d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.377134604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c792d37-a1bc-4435-802f-7dd522c482aa name=/runtime.v1.RuntimeService/Version
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.377226725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c792d37-a1bc-4435-802f-7dd522c482aa name=/runtime.v1.RuntimeService/Version
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.378662790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4088e28f-ce6b-4485-9c5b-c937523e97d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.380020954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760729329379943361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4088e28f-ce6b-4485-9c5b-c937523e97d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.380735844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5996441-869c-4674-897d-2d6c962f603d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.380862544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5996441-869c-4674-897d-2d6c962f603d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.381675292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995362c034f5bd87f373bd8595bc6fc99dd236a4caf4791b55586d0466a8c300,PodSandboxId:eff911552043969d1eb67bf797ba84364a448b205ac1d469e7d12dbbb2141190,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760729186821541623,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9731b390-9ba2-425a-887f-65322578dfef,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027d4de331f3750037a7465c7d83aaf9a2216a0997b6560dc2643ec0b80ab4,PodSandboxId:caf401d5a728ae35ecdff8e06204740ee42b9b93a5b7f1ce32352eaa0acb076e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760729145147329523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dbcabfe6-793a-4be4-85b5-9d1a4812477c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ddf01b0f7484943e3ffebce8e4e64a1109ab96c6dcad5b77b4fd15a14bd3ea1,PodSandboxId:b4deb8ca1500bf063f195cb3ccf9926f98e8a0077003d8c9a43bd64ee93ea7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760729070334196337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-jxfph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0a1c39f2-fd97-4f69-82f3-0210bfbb2d13,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9e95028cf73d7293b3dc51695c532c94b4a0bffcfd8cdc27e6b458c6201b529,PodSandboxId:14d88c757e67e5b7d20f388e3203201b89754a6139a3a4afb833ed4875829eef,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729057860778239,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lw94z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd6a75f7-cd1f-4133-a5ee-69522ba6981a,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f323f8756327bc9ef1382e9a50d1244136804fab8b58affddeee955f8ec496,PodSandboxId:42015c6e60fb985bbb55e0b7655c012188f9004ffa5bcaadeb113009e649da2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729053420454582,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-28kvz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55ed35ec-d03c-4165-998b-d6348f545d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd84d572e738d424e8a81000226c8319df879eba060a34e38253aafd0ed14f3,PodSandboxId:e750b38270f77a2187f1ecc5219263cf2ac57f3bab0d885fc3b59310cfbed695,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760729042061708242,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gxw4w,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb45b0c7-be11-4878-bd15-b67e53ad4770,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1294b144c753d357e88caf85e50bdec3c918bd7125719e49822fdef7169e01,PodSandboxId:8175cd16511ba154e81dc608ec54847b9dc51c54485783ae07f3698ac36150a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760729030719091607,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 917eef7b-28f5-4f7a-b3ec-90065894e800,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9162e73ec2ae84ecaf1ba54d655bf7583934d31b2eb25449eb9d3984f723171e,PodSandboxId:c040c4a7f4cf7a191497fd028305868331a44a2b18173bb
732cd3b91474c00fd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760728997134201567,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-r9jff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7e6d83-4610-419a-8aa1-a9330ed8d26e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4939150cb20db2911faeb634ad75560402c504421cc7b0546a324eb3468519,PodSandboxId:2f0f91a
36d1622326fe78b0e26ba270958d698ff8493573c3298492aa67084cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760728996422312351,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55359186-9bf4-4138-ad08-0a0f3d2686b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb7e68e07e3ac9745704841470c1e2f63619196e7c89681b8469e4dc90d03a7,PodSandboxId:72d93c2167a95bc09e2
17d7f06331540be5b99e80aba8f0a33a6b7624bf27b36,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760728987528751507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpqvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0931beb6-db13-496a-b1b5-5332521ec41e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23dfbfcb1861333c02a4df8399a281e5373e9e08cd281ae79a7298d55a4ad0e9,PodSandboxId:761bea054314e59b63653ecc4b394a808eebf8ec38e02cb636520e2b40fbf1ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760728986835296636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shv79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e44318e-d78f-486d-a353-0a93475a7e24,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5174a921cc0e224f41eccf93177c0e1b128a989a02694514c5947131522dd5,PodSandboxId:9cbc9dcd2ba494216c5f4bc1c4701be97669561394bcf7c88f94564fb8430cb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760728974853149975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a50361a580b064fcc3c8b39a29bf68,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855441592d227af5caf46a3da7cad48bfcf559d0fbd870e55efbc8d46b65d2b5,PodSandboxId:2b7f0348b4ea0a984ade9617a1fb5255b54f59733ca2c246ce5a6d8f4fd4c533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760728974839398865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7227217edc4a0ef672ffcd64911706aa,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59edec2aae40e770671276d501e4f9f8da8d8e171bdf306d44167cd656bc723d,PodSandboxId:7d31d357b80c06c81968d6a9584cf9517e72a0ce76af51808fd677e0d43215a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760728974845380051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-3
22722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab9751e6e6de237aec6ecb4f862eb778,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1580db0b1cfb1233d77efe9ef7d822429cefca9b5cd4922ccc4b5b7ee013180,PodSandboxId:098a1f74122bc0dcce32c94f5342798c1b2bc2b90152d3328301376a268df983,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760728974864348590,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb16faed1188f7af949f558b90f162d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5996441-869c-4674-897d-2d6c962f603d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.418405871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0bae798-92b1-433d-b9f6-0aaa55d943a3 name=/runtime.v1.RuntimeService/Version
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.418495190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0bae798-92b1-433d-b9f6-0aaa55d943a3 name=/runtime.v1.RuntimeService/Version
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.420154298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46c7afa8-3052-42a6-9950-a3ad08cd1f4b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.421407327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760729329421375674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46c7afa8-3052-42a6-9950-a3ad08cd1f4b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.422001196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ee6e8d3-8d87-4bad-b110-96ca3179c058 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.422069599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ee6e8d3-8d87-4bad-b110-96ca3179c058 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.422445283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995362c034f5bd87f373bd8595bc6fc99dd236a4caf4791b55586d0466a8c300,PodSandboxId:eff911552043969d1eb67bf797ba84364a448b205ac1d469e7d12dbbb2141190,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760729186821541623,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9731b390-9ba2-425a-887f-65322578dfef,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027d4de331f3750037a7465c7d83aaf9a2216a0997b6560dc2643ec0b80ab4,PodSandboxId:caf401d5a728ae35ecdff8e06204740ee42b9b93a5b7f1ce32352eaa0acb076e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760729145147329523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dbcabfe6-793a-4be4-85b5-9d1a4812477c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ddf01b0f7484943e3ffebce8e4e64a1109ab96c6dcad5b77b4fd15a14bd3ea1,PodSandboxId:b4deb8ca1500bf063f195cb3ccf9926f98e8a0077003d8c9a43bd64ee93ea7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760729070334196337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-jxfph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0a1c39f2-fd97-4f69-82f3-0210bfbb2d13,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9e95028cf73d7293b3dc51695c532c94b4a0bffcfd8cdc27e6b458c6201b529,PodSandboxId:14d88c757e67e5b7d20f388e3203201b89754a6139a3a4afb833ed4875829eef,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729057860778239,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lw94z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd6a75f7-cd1f-4133-a5ee-69522ba6981a,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f323f8756327bc9ef1382e9a50d1244136804fab8b58affddeee955f8ec496,PodSandboxId:42015c6e60fb985bbb55e0b7655c012188f9004ffa5bcaadeb113009e649da2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729053420454582,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-28kvz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55ed35ec-d03c-4165-998b-d6348f545d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd84d572e738d424e8a81000226c8319df879eba060a34e38253aafd0ed14f3,PodSandboxId:e750b38270f77a2187f1ecc5219263cf2ac57f3bab0d885fc3b59310cfbed695,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760729042061708242,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gxw4w,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb45b0c7-be11-4878-bd15-b67e53ad4770,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1294b144c753d357e88caf85e50bdec3c918bd7125719e49822fdef7169e01,PodSandboxId:8175cd16511ba154e81dc608ec54847b9dc51c54485783ae07f3698ac36150a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760729030719091607,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 917eef7b-28f5-4f7a-b3ec-90065894e800,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9162e73ec2ae84ecaf1ba54d655bf7583934d31b2eb25449eb9d3984f723171e,PodSandboxId:c040c4a7f4cf7a191497fd028305868331a44a2b18173bb
732cd3b91474c00fd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760728997134201567,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-r9jff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7e6d83-4610-419a-8aa1-a9330ed8d26e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4939150cb20db2911faeb634ad75560402c504421cc7b0546a324eb3468519,PodSandboxId:2f0f91a
36d1622326fe78b0e26ba270958d698ff8493573c3298492aa67084cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760728996422312351,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55359186-9bf4-4138-ad08-0a0f3d2686b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb7e68e07e3ac9745704841470c1e2f63619196e7c89681b8469e4dc90d03a7,PodSandboxId:72d93c2167a95bc09e2
17d7f06331540be5b99e80aba8f0a33a6b7624bf27b36,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760728987528751507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpqvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0931beb6-db13-496a-b1b5-5332521ec41e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23dfbfcb1861333c02a4df8399a281e5373e9e08cd281ae79a7298d55a4ad0e9,PodSandboxId:761bea054314e59b63653ecc4b394a808eebf8ec38e02cb636520e2b40fbf1ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760728986835296636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shv79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e44318e-d78f-486d-a353-0a93475a7e24,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5174a921cc0e224f41eccf93177c0e1b128a989a02694514c5947131522dd5,PodSandboxId:9cbc9dcd2ba494216c5f4bc1c4701be97669561394bcf7c88f94564fb8430cb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760728974853149975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a50361a580b064fcc3c8b39a29bf68,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855441592d227af5caf46a3da7cad48bfcf559d0fbd870e55efbc8d46b65d2b5,PodSandboxId:2b7f0348b4ea0a984ade9617a1fb5255b54f59733ca2c246ce5a6d8f4fd4c533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760728974839398865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7227217edc4a0ef672ffcd64911706aa,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59edec2aae40e770671276d501e4f9f8da8d8e171bdf306d44167cd656bc723d,PodSandboxId:7d31d357b80c06c81968d6a9584cf9517e72a0ce76af51808fd677e0d43215a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760728974845380051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-3
22722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab9751e6e6de237aec6ecb4f862eb778,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1580db0b1cfb1233d77efe9ef7d822429cefca9b5cd4922ccc4b5b7ee013180,PodSandboxId:098a1f74122bc0dcce32c94f5342798c1b2bc2b90152d3328301376a268df983,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760728974864348590,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb16faed1188f7af949f558b90f162d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ee6e8d3-8d87-4bad-b110-96ca3179c058 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.462931879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2144575-ed15-4593-a5af-ed97ebcb6813 name=/runtime.v1.RuntimeService/Version
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.463027293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2144575-ed15-4593-a5af-ed97ebcb6813 name=/runtime.v1.RuntimeService/Version
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.464912869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83f35bd0-c2e1-4355-964e-7aa4451a2cd8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.467725654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760729329467636207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83f35bd0-c2e1-4355-964e-7aa4451a2cd8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.468480661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4b1faf1-cb75-48a1-99c8-36e6dfbc7f5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.468541894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4b1faf1-cb75-48a1-99c8-36e6dfbc7f5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 19:28:49 addons-322722 crio[816]: time="2025-10-17 19:28:49.468913052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995362c034f5bd87f373bd8595bc6fc99dd236a4caf4791b55586d0466a8c300,PodSandboxId:eff911552043969d1eb67bf797ba84364a448b205ac1d469e7d12dbbb2141190,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760729186821541623,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9731b390-9ba2-425a-887f-65322578dfef,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae027d4de331f3750037a7465c7d83aaf9a2216a0997b6560dc2643ec0b80ab4,PodSandboxId:caf401d5a728ae35ecdff8e06204740ee42b9b93a5b7f1ce32352eaa0acb076e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760729145147329523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dbcabfe6-793a-4be4-85b5-9d1a4812477c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ddf01b0f7484943e3ffebce8e4e64a1109ab96c6dcad5b77b4fd15a14bd3ea1,PodSandboxId:b4deb8ca1500bf063f195cb3ccf9926f98e8a0077003d8c9a43bd64ee93ea7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760729070334196337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-jxfph,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0a1c39f2-fd97-4f69-82f3-0210bfbb2d13,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c9e95028cf73d7293b3dc51695c532c94b4a0bffcfd8cdc27e6b458c6201b529,PodSandboxId:14d88c757e67e5b7d20f388e3203201b89754a6139a3a4afb833ed4875829eef,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729057860778239,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lw94z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd6a75f7-cd1f-4133-a5ee-69522ba6981a,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4f323f8756327bc9ef1382e9a50d1244136804fab8b58affddeee955f8ec496,PodSandboxId:42015c6e60fb985bbb55e0b7655c012188f9004ffa5bcaadeb113009e649da2a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760729053420454582,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-28kvz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55ed35ec-d03c-4165-998b-d6348f545d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd84d572e738d424e8a81000226c8319df879eba060a34e38253aafd0ed14f3,PodSandboxId:e750b38270f77a2187f1ecc5219263cf2ac57f3bab0d885fc3b59310cfbed695,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760729042061708242,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gxw4w,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb45b0c7-be11-4878-bd15-b67e53ad4770,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc1294b144c753d357e88caf85e50bdec3c918bd7125719e49822fdef7169e01,PodSandboxId:8175cd16511ba154e81dc608ec54847b9dc51c54485783ae07f3698ac36150a9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760729030719091607,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 917eef7b-28f5-4f7a-b3ec-90065894e800,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9162e73ec2ae84ecaf1ba54d655bf7583934d31b2eb25449eb9d3984f723171e,PodSandboxId:c040c4a7f4cf7a191497fd028305868331a44a2b18173bb
732cd3b91474c00fd,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760728997134201567,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-r9jff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7e6d83-4610-419a-8aa1-a9330ed8d26e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f4939150cb20db2911faeb634ad75560402c504421cc7b0546a324eb3468519,PodSandboxId:2f0f91a
36d1622326fe78b0e26ba270958d698ff8493573c3298492aa67084cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760728996422312351,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55359186-9bf4-4138-ad08-0a0f3d2686b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb7e68e07e3ac9745704841470c1e2f63619196e7c89681b8469e4dc90d03a7,PodSandboxId:72d93c2167a95bc09e2
17d7f06331540be5b99e80aba8f0a33a6b7624bf27b36,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760728987528751507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpqvv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0931beb6-db13-496a-b1b5-5332521ec41e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23dfbfcb1861333c02a4df8399a281e5373e9e08cd281ae79a7298d55a4ad0e9,PodSandboxId:761bea054314e59b63653ecc4b394a808eebf8ec38e02cb636520e2b40fbf1ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760728986835296636,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shv79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e44318e-d78f-486d-a353-0a93475a7e24,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae5174a921cc0e224f41eccf93177c0e1b128a989a02694514c5947131522dd5,PodSandboxId:9cbc9dcd2ba494216c5f4bc1c4701be97669561394bcf7c88f94564fb8430cb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760728974853149975,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8a50361a580b064fcc3c8b39a29bf68,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855441592d227af5caf46a3da7cad48bfcf559d0fbd870e55efbc8d46b65d2b5,PodSandboxId:2b7f0348b4ea0a984ade9617a1fb5255b54f59733ca2c246ce5a6d8f4fd4c533,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760728974839398865,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7227217edc4a0ef672ffcd64911706aa,},Annotations:map[string]str
ing{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59edec2aae40e770671276d501e4f9f8da8d8e171bdf306d44167cd656bc723d,PodSandboxId:7d31d357b80c06c81968d6a9584cf9517e72a0ce76af51808fd677e0d43215a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760728974845380051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-3
22722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab9751e6e6de237aec6ecb4f862eb778,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1580db0b1cfb1233d77efe9ef7d822429cefca9b5cd4922ccc4b5b7ee013180,PodSandboxId:098a1f74122bc0dcce32c94f5342798c1b2bc2b90152d3328301376a268df983,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760728974864348590,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-322722,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb16faed1188f7af949f558b90f162d,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4b1faf1-cb75-48a1-99c8-36e6dfbc7f5d name=/runtime.v1.RuntimeService/ListContainers
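
The block above is a single ListContainers response from CRI-O's RuntimeService, captured verbatim by the otel-collector interceptor, and is hard to scan in this raw form. If the cluster is still up, the same container inventory can be pulled interactively from inside the VM (a sketch using the test's own binary and profile name):

    # list all containers, including exited ones, via the CRI CLI
    $ out/minikube-linux-amd64 -p addons-322722 ssh -- sudo crictl ps -a

The "container status" table below appears to be exactly this command's human-readable output.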
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	995362c034f5b       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   eff9115520439       nginx
	ae027d4de331f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   caf401d5a728a       busybox
	5ddf01b0f7484       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             4 minutes ago       Running             controller                0                   b4deb8ca1500b       ingress-nginx-controller-675c5ddd98-jxfph
	c9e95028cf73d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              patch                     0                   14d88c757e67e       ingress-nginx-admission-patch-lw94z
	a4f323f875632       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   42015c6e60fb9       ingress-nginx-admission-create-28kvz
	bdd84d572e738       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   e750b38270f77       gadget-gxw4w
	dc1294b144c75       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   8175cd16511ba       kube-ingress-dns-minikube
	9162e73ec2ae8       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   c040c4a7f4cf7       amd-gpu-device-plugin-r9jff
	8f4939150cb20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   2f0f91a36d162       storage-provisioner
	aeb7e68e07e3a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   72d93c2167a95       coredns-66bc5c9577-wpqvv
	23dfbfcb18613       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   761bea054314e       kube-proxy-shv79
	d1580db0b1cfb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   098a1f74122bc       kube-scheduler-addons-322722
	ae5174a921cc0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   9cbc9dcd2ba49       etcd-addons-322722
	59edec2aae40e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   7d31d357b80c0       kube-controller-manager-addons-322722
	855441592d227       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   2b7f0348b4ea0       kube-apiserver-addons-322722
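
The Exited rows here are expected: ingress-nginx-admission-create and ingress-nginx-admission-patch are one-shot kube-webhook-certgen jobs that provision the admission webhook's TLS certificate and then terminate, while the controller and the test's nginx pod are both Running. So the curl timeout in the test body is not a scheduling or crash problem. A quick cross-check at the Kubernetes level (a sketch, reusing the test's kubeconfig context):

    $ kubectl --context addons-322722 -n ingress-nginx get pods
    $ kubectl --context addons-322722 get pod nginx -o wide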
	
	
	==> coredns [aeb7e68e07e3ac9745704841470c1e2f63619196e7c89681b8469e4dc90d03a7] <==
	[INFO] 10.244.0.8:54574 - 27311 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000735345s
	[INFO] 10.244.0.8:54574 - 20958 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000102031s
	[INFO] 10.244.0.8:54574 - 8323 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000113548s
	[INFO] 10.244.0.8:54574 - 61879 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000152036s
	[INFO] 10.244.0.8:54574 - 32462 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000121894s
	[INFO] 10.244.0.8:54574 - 24432 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000906009s
	[INFO] 10.244.0.8:54574 - 23704 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000930139s
	[INFO] 10.244.0.8:38834 - 57913 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000237066s
	[INFO] 10.244.0.8:38834 - 57608 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000388281s
	[INFO] 10.244.0.8:42715 - 3479 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180009s
	[INFO] 10.244.0.8:42715 - 3725 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088353s
	[INFO] 10.244.0.8:53938 - 4459 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072465s
	[INFO] 10.244.0.8:53938 - 4689 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118562s
	[INFO] 10.244.0.8:49450 - 16938 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000198334s
	[INFO] 10.244.0.8:49450 - 17156 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000251429s
	[INFO] 10.244.0.23:56449 - 21171 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000579473s
	[INFO] 10.244.0.23:46068 - 7322 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000100289s
	[INFO] 10.244.0.23:58831 - 38225 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000260502s
	[INFO] 10.244.0.23:34769 - 28358 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000074642s
	[INFO] 10.244.0.23:46938 - 18084 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086186s
	[INFO] 10.244.0.23:44135 - 19412 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080324s
	[INFO] 10.244.0.23:33360 - 10720 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.004345232s
	[INFO] 10.244.0.23:37857 - 64427 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005313479s
	[INFO] 10.244.0.27:45328 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000613383s
	[INFO] 10.244.0.27:53607 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101828s
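
The NXDOMAIN-then-NOERROR pattern above is ordinary resolv.conf search-path expansion: with the cluster default of ndots:5, a name like registry.kube-system.svc.cluster.local (fewer than five dots) is first tried with each of the pod's search domains appended, producing the NXDOMAIN answers, before the literal name resolves NOERROR. The search list driving this can be inspected from any pod (a sketch, assuming the test's busybox pod is still running):

    $ kubectl --context addons-322722 exec busybox -- cat /etc/resolv.conf

The output shows the cluster DNS nameserver plus the search domains and the ndots option that generate the expansions logged above.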
	
	
	==> describe nodes <==
	Name:               addons-322722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-322722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=addons-322722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_23_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-322722
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-322722
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:26:35 +0000   Fri, 17 Oct 2025 19:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:26:35 +0000   Fri, 17 Oct 2025 19:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:26:35 +0000   Fri, 17 Oct 2025 19:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:26:35 +0000   Fri, 17 Oct 2025 19:23:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    addons-322722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 a300d917ae064f7eb56b46341932225f
	  System UUID:                a300d917-ae06-4f7e-b56b-46341932225f
	  Boot ID:                    7786ec67-2be9-4283-94cf-b00d7bb61378
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-95v9r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-gxw4w                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jxfph    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m36s
	  kube-system                 amd-gpu-device-plugin-r9jff                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 coredns-66bc5c9577-wpqvv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-322722                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m51s
	  kube-system                 kube-apiserver-addons-322722                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-addons-322722        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-shv79                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-scheduler-addons-322722                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m41s  kube-proxy       
	  Normal  Starting                 5m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m48s  kubelet          Node addons-322722 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s  kubelet          Node addons-322722 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s  kubelet          Node addons-322722 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m48s  kubelet          Node addons-322722 status is now: NodeReady
	  Normal  RegisteredNode           5m45s  node-controller  Node addons-322722 event: Registered Node addons-322722 in Controller
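
Two details here line up with the failure narrative: the node's InternalIP, 192.168.39.86, is the address the test's nslookup ran against, and the 850m of allocated CPU requests is simply the sum of the pod table (250m apiserver + 200m controller-manager + 100m each for the scheduler, etcd, coredns, and the ingress controller), i.e. 42% of this 2-CPU node, so the cluster was not CPU-starved at the request level. The section can be regenerated at any time with:

    $ kubectl --context addons-322722 describe node addons-322722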
	
	
	==> dmesg <==
	[  +6.590410] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.403121] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.710156] kauditd_printk_skb: 26 callbacks suppressed
	[Oct17 19:24] kauditd_printk_skb: 17 callbacks suppressed
	[ +10.045185] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.963441] kauditd_printk_skb: 61 callbacks suppressed
	[  +3.013231] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.991481] kauditd_printk_skb: 92 callbacks suppressed
	[  +3.673225] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.578051] kauditd_printk_skb: 35 callbacks suppressed
	[Oct17 19:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.097685] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.512635] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.559980] kauditd_printk_skb: 5 callbacks suppressed
	[Oct17 19:26] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.665380] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.921762] kauditd_printk_skb: 120 callbacks suppressed
	[  +1.901957] kauditd_printk_skb: 155 callbacks suppressed
	[  +0.159472] kauditd_printk_skb: 189 callbacks suppressed
	[  +6.733141] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.052735] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.017382] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.833485] kauditd_printk_skb: 57 callbacks suppressed
	[Oct17 19:28] kauditd_printk_skb: 71 callbacks suppressed
	
	
	==> etcd [ae5174a921cc0e224f41eccf93177c0e1b128a989a02694514c5947131522dd5] <==
	{"level":"info","ts":"2025-10-17T19:24:27.131334Z","caller":"traceutil/trace.go:172","msg":"trace[1607381712] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1094; }","duration":"112.481305ms","start":"2025-10-17T19:24:27.018847Z","end":"2025-10-17T19:24:27.131328Z","steps":["trace[1607381712] 'agreement among raft nodes before linearized reading'  (duration: 112.442596ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:24:28.662057Z","caller":"traceutil/trace.go:172","msg":"trace[1976699321] linearizableReadLoop","detail":"{readStateIndex:1134; appliedIndex:1134; }","duration":"143.080402ms","start":"2025-10-17T19:24:28.518931Z","end":"2025-10-17T19:24:28.662011Z","steps":["trace[1976699321] 'read index received'  (duration: 143.072845ms)","trace[1976699321] 'applied index is now lower than readState.Index'  (duration: 6.507µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:24:28.662783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.833282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/gadget-cluster-role\" limit:1 ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2025-10-17T19:24:28.663149Z","caller":"traceutil/trace.go:172","msg":"trace[1071321413] range","detail":"{range_begin:/registry/clusterroles/gadget-cluster-role; range_end:; response_count:1; response_revision:1097; }","duration":"144.211554ms","start":"2025-10-17T19:24:28.518926Z","end":"2025-10-17T19:24:28.663138Z","steps":["trace[1071321413] 'agreement among raft nodes before linearized reading'  (duration: 143.202427ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:24:28.663730Z","caller":"traceutil/trace.go:172","msg":"trace[1874957129] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"278.095732ms","start":"2025-10-17T19:24:28.385622Z","end":"2025-10-17T19:24:28.663717Z","steps":["trace[1874957129] 'process raft request'  (duration: 276.877728ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:24:28.670237Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.478357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:24:28.670286Z","caller":"traceutil/trace.go:172","msg":"trace[56535229] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"150.536778ms","start":"2025-10-17T19:24:28.519741Z","end":"2025-10-17T19:24:28.670277Z","steps":["trace[56535229] 'agreement among raft nodes before linearized reading'  (duration: 148.26202ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:24:28.672283Z","caller":"traceutil/trace.go:172","msg":"trace[174816036] transaction","detail":"{read_only:false; response_revision:1099; number_of_response:1; }","duration":"183.106923ms","start":"2025-10-17T19:24:28.489167Z","end":"2025-10-17T19:24:28.672274Z","steps":["trace[174816036] 'process raft request'  (duration: 178.892811ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:24:30.116963Z","caller":"traceutil/trace.go:172","msg":"trace[1562804184] transaction","detail":"{read_only:false; response_revision:1108; number_of_response:1; }","duration":"112.366646ms","start":"2025-10-17T19:24:30.004576Z","end":"2025-10-17T19:24:30.116943Z","steps":["trace[1562804184] 'process raft request'  (duration: 110.600123ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:24:35.327320Z","caller":"traceutil/trace.go:172","msg":"trace[1404014252] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"124.1797ms","start":"2025-10-17T19:24:35.203126Z","end":"2025-10-17T19:24:35.327306Z","steps":["trace[1404014252] 'process raft request'  (duration: 124.060969ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:24:43.570325Z","caller":"traceutil/trace.go:172","msg":"trace[352145481] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"125.263223ms","start":"2025-10-17T19:24:43.445049Z","end":"2025-10-17T19:24:43.570313Z","steps":["trace[352145481] 'process raft request'  (duration: 125.115731ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:26:06.401311Z","caller":"traceutil/trace.go:172","msg":"trace[1747250996] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"194.096712ms","start":"2025-10-17T19:26:06.207140Z","end":"2025-10-17T19:26:06.401237Z","steps":["trace[1747250996] 'process raft request'  (duration: 193.393621ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:26:08.182536Z","caller":"traceutil/trace.go:172","msg":"trace[2014639776] linearizableReadLoop","detail":"{readStateIndex:1506; appliedIndex:1506; }","duration":"180.452304ms","start":"2025-10-17T19:26:08.002068Z","end":"2025-10-17T19:26:08.182520Z","steps":["trace[2014639776] 'read index received'  (duration: 180.443983ms)","trace[2014639776] 'applied index is now lower than readState.Index'  (duration: 4.131µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:26:08.182776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.654422ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:26:08.182805Z","caller":"traceutil/trace.go:172","msg":"trace[375492091] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1446; }","duration":"180.733276ms","start":"2025-10-17T19:26:08.002062Z","end":"2025-10-17T19:26:08.182796Z","steps":["trace[375492091] 'agreement among raft nodes before linearized reading'  (duration: 180.631932ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:26:08.183615Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.060701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:26:08.183780Z","caller":"traceutil/trace.go:172","msg":"trace[1599655474] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1447; }","duration":"148.236252ms","start":"2025-10-17T19:26:08.035475Z","end":"2025-10-17T19:26:08.183711Z","steps":["trace[1599655474] 'agreement among raft nodes before linearized reading'  (duration: 148.042337ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:26:08.187487Z","caller":"traceutil/trace.go:172","msg":"trace[2016260200] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1447; }","duration":"200.911883ms","start":"2025-10-17T19:26:07.986290Z","end":"2025-10-17T19:26:08.187202Z","steps":["trace[2016260200] 'process raft request'  (duration: 197.099392ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:26:13.553648Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"265.363293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:26:13.553715Z","caller":"traceutil/trace.go:172","msg":"trace[1472778941] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1488; }","duration":"265.442607ms","start":"2025-10-17T19:26:13.288259Z","end":"2025-10-17T19:26:13.553701Z","steps":["trace[1472778941] 'range keys from in-memory index tree'  (duration: 265.294168ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:26:32.138399Z","caller":"traceutil/trace.go:172","msg":"trace[965639389] linearizableReadLoop","detail":"{readStateIndex:1754; appliedIndex:1754; }","duration":"135.542367ms","start":"2025-10-17T19:26:32.002834Z","end":"2025-10-17T19:26:32.138377Z","steps":["trace[965639389] 'read index received'  (duration: 135.533102ms)","trace[965639389] 'applied index is now lower than readState.Index'  (duration: 7.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T19:26:32.138622Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.7869ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:26:32.138662Z","caller":"traceutil/trace.go:172","msg":"trace[1736912751] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1679; }","duration":"135.845273ms","start":"2025-10-17T19:26:32.002808Z","end":"2025-10-17T19:26:32.138653Z","steps":["trace[1736912751] 'agreement among raft nodes before linearized reading'  (duration: 135.712933ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:26:32.138971Z","caller":"traceutil/trace.go:172","msg":"trace[816845154] transaction","detail":"{read_only:false; response_revision:1680; number_of_response:1; }","duration":"275.505655ms","start":"2025-10-17T19:26:31.863458Z","end":"2025-10-17T19:26:32.138964Z","steps":["trace[816845154] 'process raft request'  (duration: 275.398348ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:26:32.152535Z","caller":"traceutil/trace.go:172","msg":"trace[60649703] transaction","detail":"{read_only:false; response_revision:1681; number_of_response:1; }","duration":"179.460337ms","start":"2025-10-17T19:26:31.973061Z","end":"2025-10-17T19:26:32.152521Z","steps":["trace[60649703] 'process raft request'  (duration: 179.382148ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:28:49 up 6 min,  0 users,  load average: 0.49, 1.33, 0.75
	Linux addons-322722 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [855441592d227af5caf46a3da7cad48bfcf559d0fbd870e55efbc8d46b65d2b5] <==
	E1017 19:23:59.515210       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.211.98:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.211.98:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.211.98:443: connect: connection refused" logger="UnhandledError"
	E1017 19:23:59.537937       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.211.98:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.211.98:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.211.98:443: connect: connection refused" logger="UnhandledError"
	I1017 19:23:59.725255       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 19:25:52.487942       1 conn.go:339] Error on socket receive: read tcp 192.168.39.86:8443->192.168.39.1:33728: use of closed network connection
	E1017 19:25:52.676908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.86:8443->192.168.39.1:56610: use of closed network connection
	I1017 19:26:01.952984       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.142.108"}
	I1017 19:26:21.508973       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1017 19:26:21.686015       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.60.30"}
	E1017 19:26:38.236982       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1017 19:26:39.300329       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1017 19:26:57.162768       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:26:57.162847       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:26:57.197834       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:26:57.197895       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:26:57.202057       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:26:57.202217       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:26:57.230297       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:26:57.230401       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1017 19:26:57.262736       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1017 19:26:57.267042       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1017 19:26:58.202701       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1017 19:26:58.267654       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1017 19:26:58.396878       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1017 19:27:00.527399       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1017 19:28:48.076167       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.79.137"}
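
The early dial errors to 10.104.211.98:443 are the API aggregator probing the metrics-server Service before its endpoints were ready, and the burst of snapshot.storage.k8s.io handler changes at 19:26:57 is consistent with the volumesnapshots addon being disabled by a parallel subtest. Aggregated-API availability can be checked directly (a sketch):

    $ kubectl --context addons-322722 get apiservice v1beta1.metrics.k8s.io

The AVAILABLE column turns True once the backing Service answers the aggregator's probes.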
	
	
	==> kube-controller-manager [59edec2aae40e770671276d501e4f9f8da8d8e171bdf306d44167cd656bc723d] <==
	I1017 19:27:05.029864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:27:05.926603       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:05.928111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:06.685599       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:06.686650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:07.866693       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:07.868157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:13.752919       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:13.754265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:13.844118       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:13.845108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:15.876932       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:15.878335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:31.911202       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:31.912528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:32.386247       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:32.387257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:27:36.539742       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:27:36.541193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:28:01.378016       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:28:01.379189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:28:21.417172       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:28:21.418235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1017 19:28:25.750611       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1017 19:28:25.751628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
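
The repeating "Failed to watch *v1.PartialObjectMetadata" errors begin at 19:27:05, just after the apiserver removed the snapshot.storage.k8s.io groups (19:26:57 in the section above), so they are most plausibly the controller-manager's metadata informers still retrying watches against the deleted VolumeSnapshot CRDs: noisy, but harmless for this test. Confirming the watch targets are gone (a sketch):

    $ kubectl --context addons-322722 get crd | grep snapshot.storage.k8s.io

An empty result matches "the server could not find the requested resource".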
	
	
	==> kube-proxy [23dfbfcb1861333c02a4df8399a281e5373e9e08cd281ae79a7298d55a4ad0e9] <==
	I1017 19:23:07.803646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:23:07.903958       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:23:07.904029       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.86"]
	E1017 19:23:07.904143       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:23:08.159208       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1017 19:23:08.159257       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1017 19:23:08.159290       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:23:08.213310       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:23:08.214338       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:23:08.214369       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:23:08.235050       1 config.go:200] "Starting service config controller"
	I1017 19:23:08.235080       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:23:08.237489       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:23:08.237502       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:23:08.237671       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:23:08.237677       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:23:08.240034       1 config.go:309] "Starting node config controller"
	I1017 19:23:08.240060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:23:08.240071       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:23:08.335580       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:23:08.337778       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:23:08.337785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
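
kube-proxy probed IPv6, found no ip6tables nat table in this kernel build, and fell back to single-stack IPv4 iptables mode; setting route_localnet=1 (noted above) is what permits NodePort traffic on loopback addresses. The Service rules it programmed can be inspected from the VM (a sketch):

    $ out/minikube-linux-amd64 -p addons-322722 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n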
	
	
	==> kube-scheduler [d1580db0b1cfb1233d77efe9ef7d822429cefca9b5cd4922ccc4b5b7ee013180] <==
	I1017 19:22:58.337708       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:22:58.337898       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:22:58.342172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1017 19:22:58.344513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:22:58.344709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:22:58.344782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:22:58.344891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:22:58.344943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:22:58.345010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:22:58.345066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:22:58.345241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:22:58.345473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:22:58.345591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:22:58.345700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:22:58.345817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:22:58.345869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:22:58.345930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:22:58.346194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:22:58.346228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:22:58.346277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:22:58.346389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:22:59.180125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:22:59.183015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:22:59.189613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1017 19:22:59.637855       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:27:09 addons-322722 kubelet[1518]: I1017 19:27:09.855206    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:27:11 addons-322722 kubelet[1518]: E1017 19:27:11.045720    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729231043628381  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:11 addons-322722 kubelet[1518]: E1017 19:27:11.045760    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729231043628381  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:21 addons-322722 kubelet[1518]: E1017 19:27:21.048303    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729241047727061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:21 addons-322722 kubelet[1518]: E1017 19:27:21.048328    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729241047727061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:31 addons-322722 kubelet[1518]: E1017 19:27:31.053400    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729251052630770  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:31 addons-322722 kubelet[1518]: E1017 19:27:31.053441    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729251052630770  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:41 addons-322722 kubelet[1518]: E1017 19:27:41.056026    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729261055624837  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:41 addons-322722 kubelet[1518]: E1017 19:27:41.056053    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729261055624837  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:51 addons-322722 kubelet[1518]: E1017 19:27:51.061337    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729271059801422  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:51 addons-322722 kubelet[1518]: E1017 19:27:51.061358    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729271059801422  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:27:55 addons-322722 kubelet[1518]: I1017 19:27:55.854974    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-wpqvv" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:28:01 addons-322722 kubelet[1518]: E1017 19:28:01.064329    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729281064025348  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:01 addons-322722 kubelet[1518]: E1017 19:28:01.064363    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729281064025348  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:11 addons-322722 kubelet[1518]: E1017 19:28:11.067519    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729291067163496  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:11 addons-322722 kubelet[1518]: E1017 19:28:11.067613    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729291067163496  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:20 addons-322722 kubelet[1518]: I1017 19:28:20.855624    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-r9jff" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:28:21 addons-322722 kubelet[1518]: E1017 19:28:21.070652    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729301070175061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:21 addons-322722 kubelet[1518]: E1017 19:28:21.070688    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729301070175061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:21 addons-322722 kubelet[1518]: I1017 19:28:21.855234    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:28:31 addons-322722 kubelet[1518]: E1017 19:28:31.073194    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729311072778388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:31 addons-322722 kubelet[1518]: E1017 19:28:31.073225    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729311072778388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:41 addons-322722 kubelet[1518]: E1017 19:28:41.076116    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760729321075805108  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:41 addons-322722 kubelet[1518]: E1017 19:28:41.076137    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760729321075805108  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 17 19:28:48 addons-322722 kubelet[1518]: I1017 19:28:48.099725    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2lf\" (UniqueName: \"kubernetes.io/projected/f3eb0b9a-4e0f-43c8-9d86-7388efff15f2-kube-api-access-xr2lf\") pod \"hello-world-app-5d498dc89-95v9r\" (UID: \"f3eb0b9a-4e0f-43c8-9d86-7388efff15f2\") " pod="default/hello-world-app-5d498dc89-95v9r"
	
	
	==> storage-provisioner [8f4939150cb20db2911faeb634ad75560402c504421cc7b0546a324eb3468519] <==
	W1017 19:28:25.355762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:27.359822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:27.365119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:29.367917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:29.372751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:31.377258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:31.382220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:33.386135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:33.391705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:35.395475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:35.402648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:37.407030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:37.412474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:39.416485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:39.425450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:41.429709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:41.434820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:43.439158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:43.446720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:45.450342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:45.455496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:47.459272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:47.465312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:49.472653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:28:49.480242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
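Note: in the logs above, the kube-scheduler "Failed to watch ... is forbidden" errors are the usual startup race before its RBAC bindings sync; they cease once "Caches are synced" is logged. The kubelet's repeating eviction-manager "missing image stats" errors look like a CRI-O image-stats reporting gap rather than real disk pressure, and the storage-provisioner Endpoints deprecation warnings are informational. A hedged spot-check that the scheduler's permissions recovered (context name taken from this run; not part of the test):

	kubectl --context addons-322722 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-322722 auth can-i watch nodes --as=system:kube-scheduler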
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-322722 -n addons-322722
helpers_test.go:269: (dbg) Run:  kubectl --context addons-322722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-95v9r ingress-nginx-admission-create-28kvz ingress-nginx-admission-patch-lw94z
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-322722 describe pod hello-world-app-5d498dc89-95v9r ingress-nginx-admission-create-28kvz ingress-nginx-admission-patch-lw94z
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-322722 describe pod hello-world-app-5d498dc89-95v9r ingress-nginx-admission-create-28kvz ingress-nginx-admission-patch-lw94z: exit status 1 (68.585394ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-95v9r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-322722/192.168.39.86
	Start Time:       Fri, 17 Oct 2025 19:28:48 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xr2lf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xr2lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-95v9r to addons-322722
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-28kvz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lw94z" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-322722 describe pod hello-world-app-5d498dc89-95v9r ingress-nginx-admission-create-28kvz ingress-nginx-admission-patch-lw94z: exit status 1
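Note: the describe above ran without a namespace flag, so it only searched "default"; the admission-create/patch pods belong to short-lived ingress-nginx admission Jobs in the ingress-nginx namespace, so NotFound is expected, and hello-world-app was only 2s into its image pull (ContainerCreating), not wedged. A hedged way to inspect the admission Jobs while they still exist (namespace and context assumed from this run):

	kubectl --context addons-322722 -n ingress-nginx get jobs,pods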
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable ingress-dns --alsologtostderr -v=1: (1.376060407s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable ingress --alsologtostderr -v=1: (7.778437057s)
--- FAIL: TestAddons/parallel/Ingress (158.63s)

                                                
                                    
TestPreload (131.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-451716 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-451716 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m6.126749282s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-451716 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-451716 image pull gcr.io/k8s-minikube/busybox: (3.628804845s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-451716
E1017 20:14:55.767971  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-451716: (6.91720043s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-451716 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:15:41.364690  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-451716 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.229217062s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-451716 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
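Note: the busybox image pulled before the stop is absent after the restart; the second start fetched the v1.32.0 preload tarball (see "Downloading Kubernetes v1.32.0 preload ..." in the post-mortem log below), and applying it can repopulate CRI-O's image store and drop images pulled after the first boot. A hedged manual repro built only from the commands in this run, plus a final check:

	out/minikube-linux-amd64 start -p test-preload-451716 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-451716 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-451716
	out/minikube-linux-amd64 start -p test-preload-451716 --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-451716 image list | grep busybox || echo "busybox missing"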
panic.go:636: *** TestPreload FAILED at 2025-10-17 20:15:51.588237143 +0000 UTC m=+3248.693048350
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-451716 -n test-preload-451716
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-451716 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-451716 logs -n 25: (1.158790032s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-395053 ssh -n multinode-395053-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ multinode-395053 ssh -n multinode-395053 sudo cat /home/docker/cp-test_multinode-395053-m03_multinode-395053.txt                                                                    │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ cp      │ multinode-395053 cp multinode-395053-m03:/home/docker/cp-test.txt multinode-395053-m02:/home/docker/cp-test_multinode-395053-m03_multinode-395053-m02.txt                           │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ multinode-395053 ssh -n multinode-395053-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ multinode-395053 ssh -n multinode-395053-m02 sudo cat /home/docker/cp-test_multinode-395053-m03_multinode-395053-m02.txt                                                            │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ node    │ multinode-395053 node stop m03                                                                                                                                                      │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ node    │ multinode-395053 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ node    │ list -p multinode-395053                                                                                                                                                            │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p multinode-395053                                                                                                                                                                 │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:06 UTC │
	│ start   │ -p multinode-395053 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:08 UTC │
	│ node    │ list -p multinode-395053                                                                                                                                                            │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ node    │ multinode-395053 node delete m03                                                                                                                                                    │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ stop    │ multinode-395053 stop                                                                                                                                                               │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:11 UTC │
	│ start   │ -p multinode-395053 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:11 UTC │ 17 Oct 25 20:13 UTC │
	│ node    │ list -p multinode-395053                                                                                                                                                            │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ start   │ -p multinode-395053-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-395053-m02 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ start   │ -p multinode-395053-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-395053-m03 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ node    │ add -p multinode-395053                                                                                                                                                             │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │                     │
	│ delete  │ -p multinode-395053-m03                                                                                                                                                             │ multinode-395053-m03 │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ delete  │ -p multinode-395053                                                                                                                                                                 │ multinode-395053     │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:13 UTC │
	│ start   │ -p test-preload-451716 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-451716  │ jenkins │ v1.37.0 │ 17 Oct 25 20:13 UTC │ 17 Oct 25 20:14 UTC │
	│ image   │ test-preload-451716 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-451716  │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ stop    │ -p test-preload-451716                                                                                                                                                              │ test-preload-451716  │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:14 UTC │
	│ start   │ -p test-preload-451716 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-451716  │ jenkins │ v1.37.0 │ 17 Oct 25 20:14 UTC │ 17 Oct 25 20:15 UTC │
	│ image   │ test-preload-451716 image list                                                                                                                                                      │ test-preload-451716  │ jenkins │ v1.37.0 │ 17 Oct 25 20:15 UTC │ 17 Oct 25 20:15 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:14:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:14:59.184228  143758 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:14:59.184486  143758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:59.184495  143758 out.go:374] Setting ErrFile to fd 2...
	I1017 20:14:59.184500  143758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:14:59.184744  143758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 20:14:59.185221  143758 out.go:368] Setting JSON to false
	I1017 20:14:59.186052  143758 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7040,"bootTime":1760725059,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:14:59.186149  143758 start.go:141] virtualization: kvm guest
	I1017 20:14:59.188211  143758 out.go:179] * [test-preload-451716] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:14:59.189799  143758 notify.go:220] Checking for updates...
	I1017 20:14:59.189823  143758 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:14:59.191193  143758 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:14:59.192492  143758 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 20:14:59.193732  143758 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 20:14:59.195079  143758 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:14:59.196524  143758 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:14:59.198164  143758 config.go:182] Loaded profile config "test-preload-451716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1017 20:14:59.198570  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:14:59.198638  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:14:59.212974  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I1017 20:14:59.213521  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:14:59.214152  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:14:59.214178  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:14:59.214540  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:14:59.214724  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:14:59.216603  143758 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 20:14:59.217834  143758 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:14:59.218153  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:14:59.218194  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:14:59.231628  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I1017 20:14:59.232085  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:14:59.232495  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:14:59.232520  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:14:59.232947  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:14:59.233142  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:14:59.269700  143758 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 20:14:59.270876  143758 start.go:305] selected driver: kvm2
	I1017 20:14:59.270898  143758 start.go:925] validating driver "kvm2" against &{Name:test-preload-451716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-451716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:14:59.270988  143758 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:14:59.271654  143758 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:14:59.271751  143758 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 20:14:59.286545  143758 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 20:14:59.286593  143758 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 20:14:59.301200  143758 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 20:14:59.301729  143758 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:14:59.301779  143758 cni.go:84] Creating CNI manager for ""
	I1017 20:14:59.301838  143758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 20:14:59.301939  143758 start.go:349] cluster config:
	{Name:test-preload-451716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-451716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:14:59.302082  143758 iso.go:125] acquiring lock: {Name:mk2487fdd858c1cb489b6312535f031f58d5b643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:14:59.304571  143758 out.go:179] * Starting "test-preload-451716" primary control-plane node in "test-preload-451716" cluster
	I1017 20:14:59.305928  143758 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1017 20:15:00.187355  143758 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1017 20:15:00.187401  143758 cache.go:58] Caching tarball of preloaded images
	I1017 20:15:00.187567  143758 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1017 20:15:00.189257  143758 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1017 20:15:00.190621  143758 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 20:15:00.302509  143758 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1017 20:15:00.302564  143758 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1017 20:15:10.155701  143758 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1017 20:15:10.155874  143758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/config.json ...
	I1017 20:15:10.156124  143758 start.go:360] acquireMachinesLock for test-preload-451716: {Name:mkcde7cc25d2fb2130f0f72f7c9bd6675341a268 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1017 20:15:10.156199  143758 start.go:364] duration metric: took 49.875µs to acquireMachinesLock for "test-preload-451716"
	I1017 20:15:10.156225  143758 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:15:10.156234  143758 fix.go:54] fixHost starting: 
	I1017 20:15:10.156501  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:15:10.156530  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:15:10.169981  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1017 20:15:10.170543  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:15:10.171254  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:15:10.171287  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:15:10.171640  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:15:10.171872  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:10.172067  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetState
	I1017 20:15:10.174898  143758 fix.go:112] recreateIfNeeded on test-preload-451716: state=Stopped err=<nil>
	I1017 20:15:10.174933  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	W1017 20:15:10.175105  143758 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:15:10.177250  143758 out.go:252] * Restarting existing kvm2 VM for "test-preload-451716" ...
	I1017 20:15:10.177280  143758 main.go:141] libmachine: (test-preload-451716) Calling .Start
	I1017 20:15:10.177490  143758 main.go:141] libmachine: (test-preload-451716) starting domain...
	I1017 20:15:10.177517  143758 main.go:141] libmachine: (test-preload-451716) ensuring networks are active...
	I1017 20:15:10.178233  143758 main.go:141] libmachine: (test-preload-451716) Ensuring network default is active
	I1017 20:15:10.178573  143758 main.go:141] libmachine: (test-preload-451716) Ensuring network mk-test-preload-451716 is active
	I1017 20:15:10.179116  143758 main.go:141] libmachine: (test-preload-451716) getting domain XML...
	I1017 20:15:10.180180  143758 main.go:141] libmachine: (test-preload-451716) DBG | starting domain XML:
	I1017 20:15:10.180201  143758 main.go:141] libmachine: (test-preload-451716) DBG | <domain type='kvm'>
	I1017 20:15:10.180209  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <name>test-preload-451716</name>
	I1017 20:15:10.180215  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <uuid>b8ec4931-545f-4705-8b1b-fcdf0b743dc1</uuid>
	I1017 20:15:10.180223  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <memory unit='KiB'>3145728</memory>
	I1017 20:15:10.180228  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1017 20:15:10.180238  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <vcpu placement='static'>2</vcpu>
	I1017 20:15:10.180242  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <os>
	I1017 20:15:10.180251  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1017 20:15:10.180259  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <boot dev='cdrom'/>
	I1017 20:15:10.180269  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <boot dev='hd'/>
	I1017 20:15:10.180285  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <bootmenu enable='no'/>
	I1017 20:15:10.180291  143758 main.go:141] libmachine: (test-preload-451716) DBG |   </os>
	I1017 20:15:10.180295  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <features>
	I1017 20:15:10.180305  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <acpi/>
	I1017 20:15:10.180310  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <apic/>
	I1017 20:15:10.180317  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <pae/>
	I1017 20:15:10.180321  143758 main.go:141] libmachine: (test-preload-451716) DBG |   </features>
	I1017 20:15:10.180328  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1017 20:15:10.180332  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <clock offset='utc'/>
	I1017 20:15:10.180337  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <on_poweroff>destroy</on_poweroff>
	I1017 20:15:10.180347  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <on_reboot>restart</on_reboot>
	I1017 20:15:10.180372  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <on_crash>destroy</on_crash>
	I1017 20:15:10.180390  143758 main.go:141] libmachine: (test-preload-451716) DBG |   <devices>
	I1017 20:15:10.180405  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1017 20:15:10.180416  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <disk type='file' device='cdrom'>
	I1017 20:15:10.180426  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <driver name='qemu' type='raw'/>
	I1017 20:15:10.180440  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <source file='/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/boot2docker.iso'/>
	I1017 20:15:10.180454  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <target dev='hdc' bus='scsi'/>
	I1017 20:15:10.180465  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <readonly/>
	I1017 20:15:10.180479  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1017 20:15:10.180489  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </disk>
	I1017 20:15:10.180501  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <disk type='file' device='disk'>
	I1017 20:15:10.180516  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1017 20:15:10.180534  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <source file='/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/test-preload-451716.rawdisk'/>
	I1017 20:15:10.180546  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <target dev='hda' bus='virtio'/>
	I1017 20:15:10.180559  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1017 20:15:10.180569  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </disk>
	I1017 20:15:10.180587  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1017 20:15:10.180604  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1017 20:15:10.180610  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </controller>
	I1017 20:15:10.180615  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1017 20:15:10.180627  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1017 20:15:10.180637  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1017 20:15:10.180650  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </controller>
	I1017 20:15:10.180661  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <interface type='network'>
	I1017 20:15:10.180671  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <mac address='52:54:00:9c:f3:70'/>
	I1017 20:15:10.180683  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <source network='mk-test-preload-451716'/>
	I1017 20:15:10.180695  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <model type='virtio'/>
	I1017 20:15:10.180709  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1017 20:15:10.180721  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </interface>
	I1017 20:15:10.180732  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <interface type='network'>
	I1017 20:15:10.180747  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <mac address='52:54:00:86:73:92'/>
	I1017 20:15:10.180755  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <source network='default'/>
	I1017 20:15:10.180766  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <model type='virtio'/>
	I1017 20:15:10.180780  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1017 20:15:10.180789  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </interface>
	I1017 20:15:10.180795  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <serial type='pty'>
	I1017 20:15:10.180808  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <target type='isa-serial' port='0'>
	I1017 20:15:10.180819  143758 main.go:141] libmachine: (test-preload-451716) DBG |         <model name='isa-serial'/>
	I1017 20:15:10.180828  143758 main.go:141] libmachine: (test-preload-451716) DBG |       </target>
	I1017 20:15:10.180834  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </serial>
	I1017 20:15:10.180862  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <console type='pty'>
	I1017 20:15:10.180886  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <target type='serial' port='0'/>
	I1017 20:15:10.180896  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </console>
	I1017 20:15:10.180905  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <input type='mouse' bus='ps2'/>
	I1017 20:15:10.180922  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <input type='keyboard' bus='ps2'/>
	I1017 20:15:10.180935  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <audio id='1' type='none'/>
	I1017 20:15:10.180946  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <memballoon model='virtio'>
	I1017 20:15:10.180955  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1017 20:15:10.180962  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </memballoon>
	I1017 20:15:10.180967  143758 main.go:141] libmachine: (test-preload-451716) DBG |     <rng model='virtio'>
	I1017 20:15:10.180977  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <backend model='random'>/dev/random</backend>
	I1017 20:15:10.180989  143758 main.go:141] libmachine: (test-preload-451716) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1017 20:15:10.181003  143758 main.go:141] libmachine: (test-preload-451716) DBG |     </rng>
	I1017 20:15:10.181012  143758 main.go:141] libmachine: (test-preload-451716) DBG |   </devices>
	I1017 20:15:10.181022  143758 main.go:141] libmachine: (test-preload-451716) DBG | </domain>
	I1017 20:15:10.181033  143758 main.go:141] libmachine: (test-preload-451716) DBG | 
	I1017 20:15:11.464293  143758 main.go:141] libmachine: (test-preload-451716) waiting for domain to start...
	I1017 20:15:11.465479  143758 main.go:141] libmachine: (test-preload-451716) domain is now running
	I1017 20:15:11.465501  143758 main.go:141] libmachine: (test-preload-451716) waiting for IP...
	I1017 20:15:11.466501  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:11.467051  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has current primary IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:11.467088  143758 main.go:141] libmachine: (test-preload-451716) found domain IP: 192.168.39.41
	I1017 20:15:11.467108  143758 main.go:141] libmachine: (test-preload-451716) reserving static IP address...
	I1017 20:15:11.467500  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "test-preload-451716", mac: "52:54:00:9c:f3:70", ip: "192.168.39.41"} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:13:58 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:11.467521  143758 main.go:141] libmachine: (test-preload-451716) DBG | skip adding static IP to network mk-test-preload-451716 - found existing host DHCP lease matching {name: "test-preload-451716", mac: "52:54:00:9c:f3:70", ip: "192.168.39.41"}
	I1017 20:15:11.467537  143758 main.go:141] libmachine: (test-preload-451716) reserved static IP address 192.168.39.41 for domain test-preload-451716
	I1017 20:15:11.467555  143758 main.go:141] libmachine: (test-preload-451716) waiting for SSH...
	I1017 20:15:11.467568  143758 main.go:141] libmachine: (test-preload-451716) DBG | Getting to WaitForSSH function...
	I1017 20:15:11.470275  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:11.470641  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:13:58 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:11.470668  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:11.470826  143758 main.go:141] libmachine: (test-preload-451716) DBG | Using SSH client type: external
	I1017 20:15:11.470866  143758 main.go:141] libmachine: (test-preload-451716) DBG | Using SSH private key: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa (-rw-------)
	I1017 20:15:11.470929  143758 main.go:141] libmachine: (test-preload-451716) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1017 20:15:11.470961  143758 main.go:141] libmachine: (test-preload-451716) DBG | About to run SSH command:
	I1017 20:15:11.470980  143758 main.go:141] libmachine: (test-preload-451716) DBG | exit 0
	I1017 20:15:21.722715  143758 main.go:141] libmachine: (test-preload-451716) DBG | SSH cmd err, output: exit status 255: 
	I1017 20:15:21.722750  143758 main.go:141] libmachine: (test-preload-451716) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1017 20:15:21.722770  143758 main.go:141] libmachine: (test-preload-451716) DBG | command : exit 0
	I1017 20:15:21.722778  143758 main.go:141] libmachine: (test-preload-451716) DBG | err     : exit status 255
	I1017 20:15:21.722791  143758 main.go:141] libmachine: (test-preload-451716) DBG | output  : 
	I1017 20:15:24.724802  143758 main.go:141] libmachine: (test-preload-451716) DBG | Getting to WaitForSSH function...
	I1017 20:15:24.728063  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.728427  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:24.728466  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.728698  143758 main.go:141] libmachine: (test-preload-451716) DBG | Using SSH client type: external
	I1017 20:15:24.728735  143758 main.go:141] libmachine: (test-preload-451716) DBG | Using SSH private key: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa (-rw-------)
	I1017 20:15:24.728763  143758 main.go:141] libmachine: (test-preload-451716) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1017 20:15:24.728777  143758 main.go:141] libmachine: (test-preload-451716) DBG | About to run SSH command:
	I1017 20:15:24.728795  143758 main.go:141] libmachine: (test-preload-451716) DBG | exit 0
	I1017 20:15:24.863729  143758 main.go:141] libmachine: (test-preload-451716) DBG | SSH cmd err, output: <nil>: 
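The two WaitForSSH blocks above probe the guest with a no-op `exit 0` over the external ssh client until sshd answers: the first attempt fails with status 255 while the VM is still booting, and the retry roughly three seconds after the back-off succeeds. A minimal Go sketch of that retry pattern follows (an illustration under assumed attempt counts and back-off, not minikube's actual implementation; `WaitForSSH` is a hypothetical helper). For the run above the arguments would be user `docker`, IP `192.168.39.41`, and the profile's id_rsa key.

    package sshwait

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // WaitForSSH shells out to the external ssh client, as the log above does,
    // and returns nil once `exit 0` runs cleanly on the guest.
    func WaitForSSH(user, ip, keyPath string, attempts int) error {
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-o", "IdentitiesOnly=yes",
    			"-i", keyPath,
    			fmt.Sprintf("%s@%s", user, ip),
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // sshd is up and accepting the key
    		}
    		time.Sleep(3 * time.Second) // assumed back-off; the log shows a ~3s gap
    	}
    	return fmt.Errorf("ssh to %s@%s not ready after %d attempts", user, ip, attempts)
    }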
	I1017 20:15:24.864104  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetConfigRaw
	I1017 20:15:24.864768  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetIP
	I1017 20:15:24.867673  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.868163  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:24.868195  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.868495  143758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/config.json ...
	I1017 20:15:24.868780  143758 machine.go:93] provisionDockerMachine start ...
	I1017 20:15:24.868805  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:24.869069  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:24.872031  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.872575  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:24.872606  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.872820  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:24.873071  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:24.873273  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:24.873476  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:24.873706  143758 main.go:141] libmachine: Using SSH client type: native
	I1017 20:15:24.874189  143758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1017 20:15:24.874260  143758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:15:24.990438  143758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1017 20:15:24.990464  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetMachineName
	I1017 20:15:24.990753  143758 buildroot.go:166] provisioning hostname "test-preload-451716"
	I1017 20:15:24.990788  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetMachineName
	I1017 20:15:24.991012  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:24.994224  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.994691  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:24.994724  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:24.994871  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:24.995069  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:24.995234  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:24.995385  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:24.995550  143758 main.go:141] libmachine: Using SSH client type: native
	I1017 20:15:24.995796  143758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1017 20:15:24.995810  143758 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-451716 && echo "test-preload-451716" | sudo tee /etc/hostname
	I1017 20:15:25.132466  143758 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-451716
	
	I1017 20:15:25.132495  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:25.135614  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.136015  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:25.136054  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.136229  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:25.136516  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:25.136709  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:25.136839  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:25.137041  143758 main.go:141] libmachine: Using SSH client type: native
	I1017 20:15:25.137291  143758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1017 20:15:25.137309  143758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-451716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-451716/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-451716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:15:25.261811  143758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:15:25.261880  143758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-109682/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-109682/.minikube}
	I1017 20:15:25.261964  143758 buildroot.go:174] setting up certificates
	I1017 20:15:25.261977  143758 provision.go:84] configureAuth start
	I1017 20:15:25.261992  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetMachineName
	I1017 20:15:25.262331  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetIP
	I1017 20:15:25.265513  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.265904  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:25.265939  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.266179  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:25.268845  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.269289  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:25.269319  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.269473  143758 provision.go:143] copyHostCerts
	I1017 20:15:25.269526  143758 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-109682/.minikube/ca.pem, removing ...
	I1017 20:15:25.269545  143758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.pem
	I1017 20:15:25.269618  143758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-109682/.minikube/ca.pem (1082 bytes)
	I1017 20:15:25.269738  143758 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-109682/.minikube/cert.pem, removing ...
	I1017 20:15:25.269749  143758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-109682/.minikube/cert.pem
	I1017 20:15:25.269776  143758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-109682/.minikube/cert.pem (1123 bytes)
	I1017 20:15:25.269836  143758 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-109682/.minikube/key.pem, removing ...
	I1017 20:15:25.269843  143758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-109682/.minikube/key.pem
	I1017 20:15:25.269917  143758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-109682/.minikube/key.pem (1675 bytes)
	I1017 20:15:25.270011  143758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-109682/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca-key.pem org=jenkins.test-preload-451716 san=[127.0.0.1 192.168.39.41 localhost minikube test-preload-451716]
	I1017 20:15:25.875633  143758 provision.go:177] copyRemoteCerts
	I1017 20:15:25.875711  143758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:15:25.875741  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:25.878703  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.879036  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:25.879070  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:25.879266  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:25.879472  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:25.879600  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:25.879714  143758 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa Username:docker}
	I1017 20:15:25.968440  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:15:25.996876  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1017 20:15:26.026359  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:15:26.055804  143758 provision.go:87] duration metric: took 793.808262ms to configureAuth
	I1017 20:15:26.055864  143758 buildroot.go:189] setting minikube options for container-runtime
	I1017 20:15:26.056051  143758 config.go:182] Loaded profile config "test-preload-451716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1017 20:15:26.056135  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:26.059411  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.059811  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:26.059843  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.060049  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:26.060267  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.060481  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.060647  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:26.060857  143758 main.go:141] libmachine: Using SSH client type: native
	I1017 20:15:26.061070  143758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1017 20:15:26.061091  143758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:15:26.314617  143758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:15:26.314652  143758 machine.go:96] duration metric: took 1.445855524s to provisionDockerMachine
	I1017 20:15:26.314685  143758 start.go:293] postStartSetup for "test-preload-451716" (driver="kvm2")
	I1017 20:15:26.314704  143758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:15:26.314752  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:26.315130  143758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:15:26.315153  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:26.318051  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.318447  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:26.318479  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.318635  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:26.318841  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.319013  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:26.319158  143758 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa Username:docker}
	I1017 20:15:26.406717  143758 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:15:26.411646  143758 info.go:137] Remote host: Buildroot 2025.02
	I1017 20:15:26.411683  143758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-109682/.minikube/addons for local assets ...
	I1017 20:15:26.411798  143758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-109682/.minikube/files for local assets ...
	I1017 20:15:26.411913  143758 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-109682/.minikube/files/etc/ssl/certs/1135922.pem -> 1135922.pem in /etc/ssl/certs
	I1017 20:15:26.412022  143758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:15:26.424047  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/files/etc/ssl/certs/1135922.pem --> /etc/ssl/certs/1135922.pem (1708 bytes)
	I1017 20:15:26.453822  143758 start.go:296] duration metric: took 139.112717ms for postStartSetup
	I1017 20:15:26.453895  143758 fix.go:56] duration metric: took 16.297660756s for fixHost
	I1017 20:15:26.453925  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:26.456830  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.457209  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:26.457241  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.457387  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:26.457617  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.457794  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.457969  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:26.458153  143758 main.go:141] libmachine: Using SSH client type: native
	I1017 20:15:26.458348  143758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1017 20:15:26.458358  143758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1017 20:15:26.573466  143758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760732126.547046312
	
	I1017 20:15:26.573493  143758 fix.go:216] guest clock: 1760732126.547046312
	I1017 20:15:26.573502  143758 fix.go:229] Guest: 2025-10-17 20:15:26.547046312 +0000 UTC Remote: 2025-10-17 20:15:26.453902004 +0000 UTC m=+27.307686248 (delta=93.144308ms)
	I1017 20:15:26.573521  143758 fix.go:200] guest clock delta is within tolerance: 93.144308ms
	I1017 20:15:26.573525  143758 start.go:83] releasing machines lock for "test-preload-451716", held for 16.417310578s
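The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the 93ms delta as within tolerance. A sketch of that comparison in Go (the actual tolerance minikube applies is not shown in the log; the bound is left as a parameter here):

    package clockskew

    import "time"

    // WithinTolerance reports whether the absolute host/guest clock delta
    // is at most maxSkew. guestSec and guestNsec come from `date +%s.%N`.
    func WithinTolerance(guestSec, guestNsec int64, host time.Time, maxSkew time.Duration) bool {
    	delta := host.Sub(time.Unix(guestSec, guestNsec))
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= maxSkew
    }

For the values above, time.Unix(1760732126, 547046312) against the host's 20:15:26.453902004 UTC yields exactly the logged 93.144308ms.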
	I1017 20:15:26.573543  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:26.573845  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetIP
	I1017 20:15:26.577169  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.577539  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:26.577572  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.577766  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:26.578344  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:26.578554  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:26.578664  143758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:15:26.578715  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:26.578821  143758 ssh_runner.go:195] Run: cat /version.json
	I1017 20:15:26.578865  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:26.582035  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.582061  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.582462  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:26.582491  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.582518  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:26.582537  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:26.582705  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:26.582873  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:26.583002  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.583067  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:26.583168  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:26.583251  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:26.583295  143758 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa Username:docker}
	I1017 20:15:26.583455  143758 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa Username:docker}
	I1017 20:15:26.664425  143758 ssh_runner.go:195] Run: systemctl --version
	I1017 20:15:26.702759  143758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:15:26.847416  143758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:15:26.854334  143758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:15:26.854409  143758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:15:26.881907  143758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1017 20:15:26.881932  143758 start.go:495] detecting cgroup driver to use...
	I1017 20:15:26.881994  143758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:15:26.907433  143758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:15:26.927063  143758 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:15:26.927121  143758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:15:26.945949  143758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:15:26.962331  143758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:15:27.109510  143758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:15:27.322805  143758 docker.go:234] disabling docker service ...
	I1017 20:15:27.322887  143758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:15:27.339284  143758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:15:27.354448  143758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:15:27.518350  143758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:15:27.665807  143758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:15:27.681571  143758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:15:27.703485  143758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1017 20:15:27.703558  143758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.715557  143758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:15:27.715630  143758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.727915  143758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.740114  143758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.752798  143758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:15:27.766028  143758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.778230  143758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.798152  143758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:15:27.810468  143758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:15:27.820842  143758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1017 20:15:27.820915  143758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1017 20:15:27.839834  143758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:15:27.852626  143758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:15:27.994237  143758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:15:28.101058  143758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:15:28.101137  143758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:15:28.106771  143758 start.go:563] Will wait 60s for crictl version
	I1017 20:15:28.106833  143758 ssh_runner.go:195] Run: which crictl
	I1017 20:15:28.110899  143758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1017 20:15:28.149669  143758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
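Both 60-second waits above (for the CRI socket path, then for a working crictl) reduce to polling until a stat succeeds. A minimal sketch of that pattern (the poll interval is an assumption; minikube's own wait logic may differ):

    package sockwait

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // WaitForPath polls until path exists or timeout elapses, e.g.
    // WaitForPath("/var/run/crio/crio.sock", 60*time.Second).
    func WaitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    	return fmt.Errorf("%s did not appear within %s", path, timeout)
    }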
	I1017 20:15:28.149761  143758 ssh_runner.go:195] Run: crio --version
	I1017 20:15:28.178557  143758 ssh_runner.go:195] Run: crio --version
	I1017 20:15:28.209519  143758 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1017 20:15:28.210982  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetIP
	I1017 20:15:28.214594  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:28.214993  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:28.215027  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:28.215314  143758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1017 20:15:28.221422  143758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:15:28.237965  143758 kubeadm.go:883] updating cluster {Name:test-preload-451716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-451716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:15:28.238116  143758 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1017 20:15:28.238169  143758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:15:28.276719  143758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1017 20:15:28.276790  143758 ssh_runner.go:195] Run: which lz4
	I1017 20:15:28.281162  143758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1017 20:15:28.285999  143758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1017 20:15:28.286033  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1017 20:15:29.703557  143758 crio.go:462] duration metric: took 1.422432972s to copy over tarball
	I1017 20:15:29.703632  143758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1017 20:15:31.417674  143758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.714010773s)
	I1017 20:15:31.417713  143758 crio.go:469] duration metric: took 1.714127164s to extract the tarball
	I1017 20:15:31.417722  143758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1017 20:15:31.464264  143758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:15:31.513982  143758 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:15:31.514010  143758 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:15:31.514018  143758 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.32.0 crio true true} ...
	I1017 20:15:31.514112  143758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-451716 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-451716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:15:31.514179  143758 ssh_runner.go:195] Run: crio config
	I1017 20:15:31.559723  143758 cni.go:84] Creating CNI manager for ""
	I1017 20:15:31.559748  143758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 20:15:31.559771  143758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:15:31.559794  143758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-451716 NodeName:test-preload-451716 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:15:31.559941  143758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-451716"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
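The kubeadm/kubelet/kube-proxy YAML above is generated from the option values logged at kubeadm.go:190. As a rough illustration of how such a document can be rendered from those values with Go's text/template (a sketch only; minikube's real templates live in its bootstrapper and cover far more fields than this):

    package kubeadmcfg

    import (
    	"io"
    	"text/template"
    )

    // initTmpl renders just the InitConfiguration stanza shown above.
    const initTmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: {{.BindPort}}\n"

    type InitOpts struct {
    	AdvertiseAddress string
    	BindPort         int
    }

    // Render writes the stanza for the given options, e.g.
    // Render(w, InitOpts{AdvertiseAddress: "192.168.39.41", BindPort: 8443}).
    func Render(w io.Writer, o InitOpts) error {
    	t := template.Must(template.New("init").Parse(initTmpl))
    	return t.Execute(w, o)
    }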
	
	I1017 20:15:31.560009  143758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1017 20:15:31.572702  143758 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:15:31.572771  143758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:15:31.584890  143758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1017 20:15:31.605455  143758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:15:31.625977  143758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1017 20:15:31.647572  143758 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1017 20:15:31.651822  143758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:15:31.666327  143758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:15:31.815325  143758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:15:31.844975  143758 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716 for IP: 192.168.39.41
	I1017 20:15:31.845001  143758 certs.go:195] generating shared ca certs ...
	I1017 20:15:31.845040  143758 certs.go:227] acquiring lock for ca certs: {Name:mk1628109f16dfe58c75b776fa21265e79b64c50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:15:31.845219  143758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.key
	I1017 20:15:31.845290  143758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.key
	I1017 20:15:31.845304  143758 certs.go:257] generating profile certs ...
	I1017 20:15:31.845408  143758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.key
	I1017 20:15:31.845490  143758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/apiserver.key.318cae19
	I1017 20:15:31.845542  143758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/proxy-client.key
	I1017 20:15:31.845690  143758 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/113592.pem (1338 bytes)
	W1017 20:15:31.845732  143758 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-109682/.minikube/certs/113592_empty.pem, impossibly tiny 0 bytes
	I1017 20:15:31.845745  143758 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:15:31.845776  143758 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:15:31.845811  143758 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:15:31.845840  143758 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/certs/key.pem (1675 bytes)
	I1017 20:15:31.845927  143758 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-109682/.minikube/files/etc/ssl/certs/1135922.pem (1708 bytes)
	I1017 20:15:31.846908  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:15:31.879815  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:15:31.913424  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:15:31.946242  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:15:31.976352  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:15:32.006636  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:15:32.036464  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:15:32.068082  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:15:32.099282  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/certs/113592.pem --> /usr/share/ca-certificates/113592.pem (1338 bytes)
	I1017 20:15:32.128427  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/files/etc/ssl/certs/1135922.pem --> /usr/share/ca-certificates/1135922.pem (1708 bytes)
	I1017 20:15:32.157225  143758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:15:32.186291  143758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:15:32.207421  143758 ssh_runner.go:195] Run: openssl version
	I1017 20:15:32.213641  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:15:32.226807  143758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:15:32.231999  143758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:22 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:15:32.232068  143758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:15:32.239137  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:15:32.252104  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113592.pem && ln -fs /usr/share/ca-certificates/113592.pem /etc/ssl/certs/113592.pem"
	I1017 20:15:32.265316  143758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113592.pem
	I1017 20:15:32.270917  143758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:32 /usr/share/ca-certificates/113592.pem
	I1017 20:15:32.270986  143758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113592.pem
	I1017 20:15:32.278341  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113592.pem /etc/ssl/certs/51391683.0"
	I1017 20:15:32.291600  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135922.pem && ln -fs /usr/share/ca-certificates/1135922.pem /etc/ssl/certs/1135922.pem"
	I1017 20:15:32.306426  143758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135922.pem
	I1017 20:15:32.311697  143758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:32 /usr/share/ca-certificates/1135922.pem
	I1017 20:15:32.311772  143758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135922.pem
	I1017 20:15:32.319220  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1135922.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:15:32.333031  143758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:15:32.338681  143758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:15:32.346373  143758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:15:32.354037  143758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:15:32.362031  143758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:15:32.369785  143758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:15:32.377417  143758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
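Each `openssl x509 ... -checkend 86400` run above succeeds only if the certificate remains valid for at least another 24 hours (86400 seconds). The equivalent check in Go, as a self-contained sketch:

    package certcheck

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // ValidFor reports whether the PEM-encoded certificate at path will
    // still be valid d from now (the same condition `-checkend` verifies).
    func ValidFor(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }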
	I1017 20:15:32.385404  143758 kubeadm.go:400] StartCluster: {Name:test-preload-451716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-451716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:15:32.385494  143758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:15:32.385584  143758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:15:32.433294  143758 cri.go:89] found id: ""
	I1017 20:15:32.433369  143758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:15:32.446234  143758 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:15:32.446254  143758 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:15:32.446297  143758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:15:32.458394  143758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:15:32.458870  143758 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-451716" does not appear in /home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 20:15:32.458995  143758 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-109682/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-451716" cluster setting kubeconfig missing "test-preload-451716" context setting]
	I1017 20:15:32.459273  143758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/kubeconfig: {Name:mk80b2133650ff16478c714743c00aa30ac700c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:15:32.459810  143758 kapi.go:59] client config for test-preload-451716: &rest.Config{Host:"https://192.168.39.41:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.key", CAFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:15:32.460273  143758 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:15:32.460289  143758 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:15:32.460293  143758 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:15:32.460296  143758 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:15:32.460299  143758 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:15:32.460641  143758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:15:32.472555  143758 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.41
	I1017 20:15:32.472588  143758 kubeadm.go:1160] stopping kube-system containers ...
	I1017 20:15:32.472603  143758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1017 20:15:32.472679  143758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:15:32.517779  143758 cri.go:89] found id: ""
	I1017 20:15:32.517863  143758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1017 20:15:32.541315  143758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:15:32.553178  143758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:15:32.553196  143758 kubeadm.go:157] found existing configuration files:
	
	I1017 20:15:32.553242  143758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:15:32.564636  143758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:15:32.564704  143758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:15:32.576462  143758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:15:32.587198  143758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:15:32.587260  143758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:15:32.598823  143758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:15:32.609320  143758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:15:32.609379  143758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:15:32.620742  143758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:15:32.631214  143758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:15:32.631300  143758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
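
The four grep/rm pairs above implement a simple rule: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise delete it so the following `kubeadm init phase kubeconfig` run regenerates it. A sketch of that pattern (an illustrative helper, not minikube's actual code; `rm -f` semantics are mimicked by ignoring missing files):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or stale: remove so kubeadm regenerates it.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				log.Fatal(rmErr)
    			}
    		}
    	}
    }
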
	I1017 20:15:32.642728  143758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:15:32.654180  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:15:32.712361  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:15:33.684372  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:15:33.944209  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:15:34.016864  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:15:34.110938  143758 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:15:34.111035  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:34.611369  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:35.111260  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:35.611984  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:36.111580  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:36.611524  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:36.642549  143758 api_server.go:72] duration metric: took 2.53162409s to wait for apiserver process to appear ...
	I1017 20:15:36.642578  143758 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:15:36.642597  143758 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1017 20:15:39.325352  143758 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 20:15:39.325396  143758 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 20:15:39.325417  143758 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1017 20:15:39.336373  143758 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 20:15:39.336406  143758 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 20:15:39.642885  143758 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1017 20:15:39.647785  143758 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:15:39.647812  143758 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:15:40.143591  143758 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1017 20:15:40.148834  143758 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:15:40.148876  143758 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:15:40.643087  143758 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1017 20:15:40.649510  143758 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1017 20:15:40.659069  143758 api_server.go:141] control plane version: v1.32.0
	I1017 20:15:40.659101  143758 api_server.go:131] duration metric: took 4.016515824s to wait for apiserver health ...
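
The wait above tolerates early 403s (anonymous requests before the RBAC bootstrap hook finishes) and 500s (post-start hooks still failing) and keeps polling until /healthz returns a plain 200 "ok". A bare-bones sketch of such a loop; InsecureSkipVerify is an illustration-only shortcut, whereas minikube verifies against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.41:8443/healthz")
    		if err == nil {
    			status := resp.StatusCode
    			resp.Body.Close()
    			if status == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }
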
	I1017 20:15:40.659112  143758 cni.go:84] Creating CNI manager for ""
	I1017 20:15:40.659120  143758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 20:15:40.661028  143758 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1017 20:15:40.662522  143758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1017 20:15:40.681089  143758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1017 20:15:40.723736  143758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:15:40.730306  143758 system_pods.go:59] 7 kube-system pods found
	I1017 20:15:40.730358  143758 system_pods.go:61] "coredns-668d6bf9bc-rg2hx" [8dc0c4dd-408a-4e61-aea4-b380c18474fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:15:40.730368  143758 system_pods.go:61] "etcd-test-preload-451716" [525bfa4d-74a7-4f83-b75b-b183ea149f8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:15:40.730382  143758 system_pods.go:61] "kube-apiserver-test-preload-451716" [3298f847-1588-42b4-8d8e-067deb37cec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:15:40.730390  143758 system_pods.go:61] "kube-controller-manager-test-preload-451716" [d69ff22a-2f77-4776-8dc2-fa23d0ee4655] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:15:40.730396  143758 system_pods.go:61] "kube-proxy-hmwmf" [114544c6-c3b7-4099-b901-e830fdcd3b29] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:15:40.730409  143758 system_pods.go:61] "kube-scheduler-test-preload-451716" [895d550b-eef1-4390-883d-ff360ff5385b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:15:40.730417  143758 system_pods.go:61] "storage-provisioner" [7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:15:40.730424  143758 system_pods.go:74] duration metric: took 6.665219ms to wait for pod list to return data ...
	I1017 20:15:40.730432  143758 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:15:40.735860  143758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1017 20:15:40.735889  143758 node_conditions.go:123] node cpu capacity is 2
	I1017 20:15:40.735901  143758 node_conditions.go:105] duration metric: took 5.464152ms to run NodePressure ...
	I1017 20:15:40.735959  143758 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:15:41.055743  143758 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1017 20:15:41.060538  143758 kubeadm.go:743] kubelet initialised
	I1017 20:15:41.060575  143758 kubeadm.go:744] duration metric: took 4.798438ms waiting for restarted kubelet to initialise ...
	I1017 20:15:41.060598  143758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:15:41.075502  143758 ops.go:34] apiserver oom_adj: -16
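
The oom_adj probe confirms the restarted apiserver is shielded from the kernel OOM killer (-16 in this run). A small sketch of the same probe; the pgrep flags are simplified from the full command-line match minikube runs above:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x: exact name match, -n: newest matching process.
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		os.Exit(1)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
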
	I1017 20:15:41.075533  143758 kubeadm.go:601] duration metric: took 8.629272258s to restartPrimaryControlPlane
	I1017 20:15:41.075544  143758 kubeadm.go:402] duration metric: took 8.69015195s to StartCluster
	I1017 20:15:41.075565  143758 settings.go:142] acquiring lock: {Name:mkb7b59ea598dca0a5adfe4320f5bbb3feb2252c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:15:41.075667  143758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 20:15:41.076522  143758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-109682/kubeconfig: {Name:mk80b2133650ff16478c714743c00aa30ac700c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:15:41.076879  143758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:15:41.076972  143758 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:15:41.077056  143758 config.go:182] Loaded profile config "test-preload-451716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1017 20:15:41.077064  143758 addons.go:69] Setting storage-provisioner=true in profile "test-preload-451716"
	I1017 20:15:41.077085  143758 addons.go:69] Setting default-storageclass=true in profile "test-preload-451716"
	I1017 20:15:41.077107  143758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-451716"
	I1017 20:15:41.077089  143758 addons.go:238] Setting addon storage-provisioner=true in "test-preload-451716"
	W1017 20:15:41.077189  143758 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:15:41.077231  143758 host.go:66] Checking if "test-preload-451716" exists ...
	I1017 20:15:41.077510  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:15:41.077540  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:15:41.077546  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:15:41.077578  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:15:41.079490  143758 out.go:179] * Verifying Kubernetes components...
	I1017 20:15:41.080988  143758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:15:41.092195  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1017 20:15:41.092209  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I1017 20:15:41.092766  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:15:41.092822  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:15:41.093242  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:15:41.093260  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:15:41.093378  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:15:41.093397  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:15:41.093625  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:15:41.093728  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:15:41.093908  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetState
	I1017 20:15:41.094214  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:15:41.094245  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:15:41.096542  143758 kapi.go:59] client config for test-preload-451716: &rest.Config{Host:"https://192.168.39.41:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.key", CAFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:15:41.096946  143758 addons.go:238] Setting addon default-storageclass=true in "test-preload-451716"
	W1017 20:15:41.096972  143758 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:15:41.097006  143758 host.go:66] Checking if "test-preload-451716" exists ...
	I1017 20:15:41.097399  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:15:41.097441  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:15:41.109095  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I1017 20:15:41.109779  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:15:41.110419  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:15:41.110445  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:15:41.110863  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:15:41.111065  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetState
	I1017 20:15:41.111487  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
	I1017 20:15:41.112070  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:15:41.112612  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:15:41.112643  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:15:41.113076  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:15:41.113329  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:41.113673  143758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:15:41.113726  143758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:15:41.115501  143758 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:15:41.117077  143758 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:15:41.117102  143758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:15:41.117123  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:41.120992  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:41.121577  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:41.121602  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:41.121867  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:41.122076  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:41.122295  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:41.122430  143758 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa Username:docker}
	I1017 20:15:41.129231  143758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43765
	I1017 20:15:41.129768  143758 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:15:41.130287  143758 main.go:141] libmachine: Using API Version  1
	I1017 20:15:41.130316  143758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:15:41.130760  143758 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:15:41.131046  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetState
	I1017 20:15:41.133172  143758 main.go:141] libmachine: (test-preload-451716) Calling .DriverName
	I1017 20:15:41.133419  143758 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:15:41.133435  143758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:15:41.133455  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHHostname
	I1017 20:15:41.136755  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:41.137269  143758 main.go:141] libmachine: (test-preload-451716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:f3:70", ip: ""} in network mk-test-preload-451716: {Iface:virbr1 ExpiryTime:2025-10-17 21:15:21 +0000 UTC Type:0 Mac:52:54:00:9c:f3:70 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-451716 Clientid:01:52:54:00:9c:f3:70}
	I1017 20:15:41.137299  143758 main.go:141] libmachine: (test-preload-451716) DBG | domain test-preload-451716 has defined IP address 192.168.39.41 and MAC address 52:54:00:9c:f3:70 in network mk-test-preload-451716
	I1017 20:15:41.137463  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHPort
	I1017 20:15:41.137655  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHKeyPath
	I1017 20:15:41.137888  143758 main.go:141] libmachine: (test-preload-451716) Calling .GetSSHUsername
	I1017 20:15:41.138098  143758 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/test-preload-451716/id_rsa Username:docker}
	I1017 20:15:41.389268  143758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:15:41.426273  143758 node_ready.go:35] waiting up to 6m0s for node "test-preload-451716" to be "Ready" ...
	I1017 20:15:41.432018  143758 node_ready.go:49] node "test-preload-451716" is "Ready"
	I1017 20:15:41.432071  143758 node_ready.go:38] duration metric: took 5.715264ms for node "test-preload-451716" to be "Ready" ...
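
The readiness check above is a single Get against the API rather than a watch. A hedged client-go sketch of the equivalent lookup (the kubeconfig path is illustrative; minikube builds its client from the profile's certs instead):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-451716", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			// "True" once the kubelet has reported in, matching the log above.
    			fmt.Println("node Ready condition:", c.Status)
    		}
    	}
    }
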
	I1017 20:15:41.432091  143758 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:15:41.432151  143758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:15:41.470999  143758 api_server.go:72] duration metric: took 394.071089ms to wait for apiserver process to appear ...
	I1017 20:15:41.471026  143758 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:15:41.471049  143758 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1017 20:15:41.488098  143758 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1017 20:15:41.491290  143758 api_server.go:141] control plane version: v1.32.0
	I1017 20:15:41.491328  143758 api_server.go:131] duration metric: took 20.292096ms to wait for apiserver health ...
	I1017 20:15:41.491342  143758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:15:41.494270  143758 system_pods.go:59] 7 kube-system pods found
	I1017 20:15:41.494303  143758 system_pods.go:61] "coredns-668d6bf9bc-rg2hx" [8dc0c4dd-408a-4e61-aea4-b380c18474fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:15:41.494310  143758 system_pods.go:61] "etcd-test-preload-451716" [525bfa4d-74a7-4f83-b75b-b183ea149f8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:15:41.494317  143758 system_pods.go:61] "kube-apiserver-test-preload-451716" [3298f847-1588-42b4-8d8e-067deb37cec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:15:41.494323  143758 system_pods.go:61] "kube-controller-manager-test-preload-451716" [d69ff22a-2f77-4776-8dc2-fa23d0ee4655] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:15:41.494327  143758 system_pods.go:61] "kube-proxy-hmwmf" [114544c6-c3b7-4099-b901-e830fdcd3b29] Running
	I1017 20:15:41.494332  143758 system_pods.go:61] "kube-scheduler-test-preload-451716" [895d550b-eef1-4390-883d-ff360ff5385b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:15:41.494337  143758 system_pods.go:61] "storage-provisioner" [7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:15:41.494344  143758 system_pods.go:74] duration metric: took 2.994724ms to wait for pod list to return data ...
	I1017 20:15:41.494357  143758 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:15:41.498877  143758 default_sa.go:45] found service account: "default"
	I1017 20:15:41.498905  143758 default_sa.go:55] duration metric: took 4.540358ms for default service account to be created ...
	I1017 20:15:41.498917  143758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:15:41.504082  143758 system_pods.go:86] 7 kube-system pods found
	I1017 20:15:41.504111  143758 system_pods.go:89] "coredns-668d6bf9bc-rg2hx" [8dc0c4dd-408a-4e61-aea4-b380c18474fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:15:41.504118  143758 system_pods.go:89] "etcd-test-preload-451716" [525bfa4d-74a7-4f83-b75b-b183ea149f8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:15:41.504126  143758 system_pods.go:89] "kube-apiserver-test-preload-451716" [3298f847-1588-42b4-8d8e-067deb37cec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:15:41.504132  143758 system_pods.go:89] "kube-controller-manager-test-preload-451716" [d69ff22a-2f77-4776-8dc2-fa23d0ee4655] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:15:41.504136  143758 system_pods.go:89] "kube-proxy-hmwmf" [114544c6-c3b7-4099-b901-e830fdcd3b29] Running
	I1017 20:15:41.504144  143758 system_pods.go:89] "kube-scheduler-test-preload-451716" [895d550b-eef1-4390-883d-ff360ff5385b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:15:41.504150  143758 system_pods.go:89] "storage-provisioner" [7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:15:41.504161  143758 system_pods.go:126] duration metric: took 5.236116ms to wait for k8s-apps to be running ...
	I1017 20:15:41.504174  143758 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:15:41.504226  143758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:15:41.554916  143758 system_svc.go:56] duration metric: took 50.724529ms WaitForService to wait for kubelet
	I1017 20:15:41.554959  143758 kubeadm.go:586] duration metric: took 478.036833ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:15:41.554991  143758 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:15:41.559075  143758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:15:41.563882  143758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1017 20:15:41.563913  143758 node_conditions.go:123] node cpu capacity is 2
	I1017 20:15:41.563925  143758 node_conditions.go:105] duration metric: took 8.927242ms to run NodePressure ...
	I1017 20:15:41.563943  143758 start.go:241] waiting for startup goroutines ...
	I1017 20:15:41.594000  143758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:15:42.270699  143758 main.go:141] libmachine: Making call to close driver server
	I1017 20:15:42.270738  143758 main.go:141] libmachine: (test-preload-451716) Calling .Close
	I1017 20:15:42.270769  143758 main.go:141] libmachine: Making call to close driver server
	I1017 20:15:42.270791  143758 main.go:141] libmachine: (test-preload-451716) Calling .Close
	I1017 20:15:42.271102  143758 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:15:42.271118  143758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:15:42.271128  143758 main.go:141] libmachine: Making call to close driver server
	I1017 20:15:42.271136  143758 main.go:141] libmachine: (test-preload-451716) Calling .Close
	I1017 20:15:42.271214  143758 main.go:141] libmachine: (test-preload-451716) DBG | Closing plugin on server side
	I1017 20:15:42.271228  143758 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:15:42.271238  143758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:15:42.271245  143758 main.go:141] libmachine: Making call to close driver server
	I1017 20:15:42.271252  143758 main.go:141] libmachine: (test-preload-451716) Calling .Close
	I1017 20:15:42.271371  143758 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:15:42.271382  143758 main.go:141] libmachine: (test-preload-451716) DBG | Closing plugin on server side
	I1017 20:15:42.271389  143758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:15:42.271426  143758 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:15:42.271435  143758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:15:42.285673  143758 main.go:141] libmachine: Making call to close driver server
	I1017 20:15:42.285698  143758 main.go:141] libmachine: (test-preload-451716) Calling .Close
	I1017 20:15:42.286051  143758 main.go:141] libmachine: Successfully made call to close driver server
	I1017 20:15:42.286074  143758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1017 20:15:42.288828  143758 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:15:42.290138  143758 addons.go:514] duration metric: took 1.213166335s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:15:42.290184  143758 start.go:246] waiting for cluster config update ...
	I1017 20:15:42.290196  143758 start.go:255] writing updated cluster config ...
	I1017 20:15:42.290465  143758 ssh_runner.go:195] Run: rm -f paused
	I1017 20:15:42.296932  143758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:15:42.297385  143758 kapi.go:59] client config for test-preload-451716: &rest.Config{Host:"https://192.168.39.41:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/test-preload-451716/client.key", CAFile:"/home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:15:42.300593  143758 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-rg2hx" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:15:44.308538  143758 pod_ready.go:104] pod "coredns-668d6bf9bc-rg2hx" is not "Ready", error: <nil>
	W1017 20:15:46.806697  143758 pod_ready.go:104] pod "coredns-668d6bf9bc-rg2hx" is not "Ready", error: <nil>
	I1017 20:15:48.315265  143758 pod_ready.go:94] pod "coredns-668d6bf9bc-rg2hx" is "Ready"
	I1017 20:15:48.315297  143758 pod_ready.go:86] duration metric: took 6.014665836s for pod "coredns-668d6bf9bc-rg2hx" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.318134  143758 pod_ready.go:83] waiting for pod "etcd-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.328057  143758 pod_ready.go:94] pod "etcd-test-preload-451716" is "Ready"
	I1017 20:15:48.328085  143758 pod_ready.go:86] duration metric: took 9.926442ms for pod "etcd-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.419042  143758 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.423376  143758 pod_ready.go:94] pod "kube-apiserver-test-preload-451716" is "Ready"
	I1017 20:15:48.423401  143758 pod_ready.go:86] duration metric: took 4.332762ms for pod "kube-apiserver-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.425400  143758 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.504025  143758 pod_ready.go:94] pod "kube-controller-manager-test-preload-451716" is "Ready"
	I1017 20:15:48.504062  143758 pod_ready.go:86] duration metric: took 78.638355ms for pod "kube-controller-manager-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:48.704572  143758 pod_ready.go:83] waiting for pod "kube-proxy-hmwmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:49.103828  143758 pod_ready.go:94] pod "kube-proxy-hmwmf" is "Ready"
	I1017 20:15:49.103868  143758 pod_ready.go:86] duration metric: took 399.260505ms for pod "kube-proxy-hmwmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:49.304475  143758 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:51.310234  143758 pod_ready.go:94] pod "kube-scheduler-test-preload-451716" is "Ready"
	I1017 20:15:51.310264  143758 pod_ready.go:86] duration metric: took 2.005764038s for pod "kube-scheduler-test-preload-451716" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:15:51.310275  143758 pod_ready.go:40] duration metric: took 9.013308739s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
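
Each wait above polls one control-plane pod until its Ready condition turns True (or the pod disappears). A sketch of that loop with client-go's wait helper; names and the kubeconfig path are illustrative, and the "or be gone" branch is omitted for brevity:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's PodReady condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient errors as "not ready yet"
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-hmwmf"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pod is Ready")
    }
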
	I1017 20:15:51.353343  143758 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1017 20:15:51.355035  143758 out.go:203] 
	W1017 20:15:51.356373  143758 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1017 20:15:51.357566  143758 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:15:51.358713  143758 out.go:179] * Done! kubectl is now configured to use "test-preload-451716" cluster and "default" namespace by default
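
The skew warning above compares kubectl's minor version with the cluster's and flags anything beyond the one-minor difference kubectl supports. A toy sketch of the comparison, with the version strings hard-coded from this run:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a "major.minor.patch" version string.
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.34.1", "1.32.0"
    	skew := minor(kubectl) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew)
    	if skew > 1 {
    		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n", kubectl, cluster)
    	}
    }
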
	
	
	==> CRI-O <==
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.296590451Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cdef3f42382347d222502b4b0c398d13daab1d8bbab3848f9573c4507a76bcd0,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-rg2hx,Uid:8dc0c4dd-408a-4e61-aea4-b380c18474fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760732143905518151,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-rg2hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dc0c4dd-408a-4e61-aea4-b380c18474fc,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-17T20:15:40.034381599Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b46e65c1a2b3da0140fb92ce9a3e77b9bc117e99d54e8524ccda0c8807270705,Metadata:&PodSandboxMetadata{Name:kube-proxy-hmwmf,Uid:114544c6-c3b7-4099-b901-e830fdcd3b29,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1760732140359082436,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hmwmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114544c6-c3b7-4099-b901-e830fdcd3b29,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-17T20:15:40.034376942Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760732140348752722,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4abd-b8b5-cd51
17243cc6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-17T20:15:40.034380311Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a40dada1faf3eb6c37b399d82bdbd7db3094a573d03936917270040b74b5265,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-451716,Uid:4c1bc00c9147a46ad
4a6f20356536ea5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760732135897864254,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1bc00c9147a46ad4a6f20356536ea5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.41:2379,kubernetes.io/config.hash: 4c1bc00c9147a46ad4a6f20356536ea5,kubernetes.io/config.seen: 2025-10-17T20:15:34.089223864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:53e4154c4319e85c425e07007e704f1181391e6711a67c16318125e8e4a2a93b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-451716,Uid:0ab89ef46abc848e2155c500483fb6e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760732135890262987,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pre
load-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ab89ef46abc848e2155c500483fb6e2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0ab89ef46abc848e2155c500483fb6e2,kubernetes.io/config.seen: 2025-10-17T20:15:34.029156649Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cd68ac5da8546ddd179151175057683c5062dd295f975b3a9f58bb492e38f1d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-451716,Uid:1a78c39b5921a80e047d06a4db6e9281,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760732135887205317,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78c39b5921a80e047d06a4db6e9281,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a78c39b5921a80e047d06a4db6e9281,kubernetes.io/config.seen: 2025-10-17T20:
15:34.029155668Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4a2b6f440a55e916d8d01b155293ebb0b821db9dcf40e75872439e92fa4df1b0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-451716,Uid:a464cee2ea17e9f43a84555acb69e3ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760732135868243519,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a464cee2ea17e9f43a84555acb69e3ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.41:8443,kubernetes.io/config.hash: a464cee2ea17e9f43a84555acb69e3ef,kubernetes.io/config.seen: 2025-10-17T20:15:34.029151357Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6818e3f6-308b-4175-8929-1c380b135d15 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.297471385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b51c5a7d-7460-433d-b480-859821b61727 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.297561225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b51c5a7d-7460-433d-b480-859821b61727 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.297777902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b6ee9c184c8af32274b3775f3d735e23f6941245c832755c803fc91d02ece37,PodSandboxId:cdef3f42382347d222502b4b0c398d13daab1d8bbab3848f9573c4507a76bcd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760732144129221971,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rg2hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dc0c4dd-408a-4e61-aea4-b380c18474fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce2a6f43a85f873938e220dd04f4a680ed492a07d689b18b8bebefc88d76b1c,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760732141269095087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f76a8df5a31b12d701ad4ee770511aaae9480f1021eb187c5debec8df7474a3,PodSandboxId:b46e65c1a2b3da0140fb92ce9a3e77b9bc117e99d54e8524ccda0c8807270705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760732140597591040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmwmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
4544c6-c3b7-4099-b901-e830fdcd3b29,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef2ac09e7506d26e7611ce8f6fc19a0655fef5839b38d9b2a3002cfa7515052,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760732140565079624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4
abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcbd5f248b4b125e60b1e08e9c88a76d87ee8a4758e7e91dcea225655de4cee,PodSandboxId:5a40dada1faf3eb6c37b399d82bdbd7db3094a573d03936917270040b74b5265,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760732136150573428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1bc00c9147a46ad4a6f20356536ea5,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ae3bbf420fc94498954b031476e79c26ca863026d6b038c18bf6e433263888d,PodSandboxId:53e4154c4319e85c425e07007e704f1181391e6711a67c16318125e8e4a2a93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760732136145620724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ab89ef46abc848e2155c500483fb6e2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b67f17112cf2815a804580dcec7b4e673bcea0387bb4368281f7f50909e6f88,PodSandboxId:4cd68ac5da8546ddd179151175057683c5062dd295f975b3a9f58bb492e38f1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760732136108327872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78c39b5921a80e047d06a4db6e9281,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d395d8351cec6dcef35840a3089902404274cbbf435997ed2569e9320cc4a9f,PodSandboxId:4a2b6f440a55e916d8d01b155293ebb0b821db9dcf40e75872439e92fa4df1b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760732136079780137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a464cee2ea17e9f43a84555acb69e3ef,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b51c5a7d-7460-433d-b480-859821b61727 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.318925571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49098208-b14e-42fc-ae59-2949d630779a name=/runtime.v1.RuntimeService/Version
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.319130974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49098208-b14e-42fc-ae59-2949d630779a name=/runtime.v1.RuntimeService/Version
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.321009896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf269229-2c4c-4cdb-abd2-603352a2334e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.321583304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732152321482696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf269229-2c4c-4cdb-abd2-603352a2334e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.322438152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d17e4d96-fb7b-434b-8141-a8d4fd868497 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.322494375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d17e4d96-fb7b-434b-8141-a8d4fd868497 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.322724834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b6ee9c184c8af32274b3775f3d735e23f6941245c832755c803fc91d02ece37,PodSandboxId:cdef3f42382347d222502b4b0c398d13daab1d8bbab3848f9573c4507a76bcd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760732144129221971,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rg2hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dc0c4dd-408a-4e61-aea4-b380c18474fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce2a6f43a85f873938e220dd04f4a680ed492a07d689b18b8bebefc88d76b1c,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760732141269095087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f76a8df5a31b12d701ad4ee770511aaae9480f1021eb187c5debec8df7474a3,PodSandboxId:b46e65c1a2b3da0140fb92ce9a3e77b9bc117e99d54e8524ccda0c8807270705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760732140597591040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmwmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
4544c6-c3b7-4099-b901-e830fdcd3b29,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef2ac09e7506d26e7611ce8f6fc19a0655fef5839b38d9b2a3002cfa7515052,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760732140565079624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4
abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcbd5f248b4b125e60b1e08e9c88a76d87ee8a4758e7e91dcea225655de4cee,PodSandboxId:5a40dada1faf3eb6c37b399d82bdbd7db3094a573d03936917270040b74b5265,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760732136150573428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1bc00c9147a46ad4a6f20356536ea5,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ae3bbf420fc94498954b031476e79c26ca863026d6b038c18bf6e433263888d,PodSandboxId:53e4154c4319e85c425e07007e704f1181391e6711a67c16318125e8e4a2a93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760732136145620724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ab89ef46abc848e2155c500483fb6e2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b67f17112cf2815a804580dcec7b4e673bcea0387bb4368281f7f50909e6f88,PodSandboxId:4cd68ac5da8546ddd179151175057683c5062dd295f975b3a9f58bb492e38f1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760732136108327872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78c39b5921a80e047d06a4db6e9281,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d395d8351cec6dcef35840a3089902404274cbbf435997ed2569e9320cc4a9f,PodSandboxId:4a2b6f440a55e916d8d01b155293ebb0b821db9dcf40e75872439e92fa4df1b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760732136079780137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a464cee2ea17e9f43a84555acb69e3ef,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d17e4d96-fb7b-434b-8141-a8d4fd868497 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.362252471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f48b9b4b-97c3-4d56-9145-2832ead3feda name=/runtime.v1.RuntimeService/Version
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.362322940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f48b9b4b-97c3-4d56-9145-2832ead3feda name=/runtime.v1.RuntimeService/Version
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.363334220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6635c52e-c26f-461b-ba3d-b9bc78d201e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.363789183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732152363765963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6635c52e-c26f-461b-ba3d-b9bc78d201e2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.364649698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0988a09b-5903-4482-aaf4-84c27c7ba180 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.364984193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0988a09b-5903-4482-aaf4-84c27c7ba180 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.365510503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b6ee9c184c8af32274b3775f3d735e23f6941245c832755c803fc91d02ece37,PodSandboxId:cdef3f42382347d222502b4b0c398d13daab1d8bbab3848f9573c4507a76bcd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760732144129221971,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rg2hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dc0c4dd-408a-4e61-aea4-b380c18474fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce2a6f43a85f873938e220dd04f4a680ed492a07d689b18b8bebefc88d76b1c,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760732141269095087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f76a8df5a31b12d701ad4ee770511aaae9480f1021eb187c5debec8df7474a3,PodSandboxId:b46e65c1a2b3da0140fb92ce9a3e77b9bc117e99d54e8524ccda0c8807270705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760732140597591040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmwmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
4544c6-c3b7-4099-b901-e830fdcd3b29,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef2ac09e7506d26e7611ce8f6fc19a0655fef5839b38d9b2a3002cfa7515052,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760732140565079624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4
abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcbd5f248b4b125e60b1e08e9c88a76d87ee8a4758e7e91dcea225655de4cee,PodSandboxId:5a40dada1faf3eb6c37b399d82bdbd7db3094a573d03936917270040b74b5265,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760732136150573428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1bc00c9147a46ad4a6f20356536ea5,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ae3bbf420fc94498954b031476e79c26ca863026d6b038c18bf6e433263888d,PodSandboxId:53e4154c4319e85c425e07007e704f1181391e6711a67c16318125e8e4a2a93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760732136145620724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ab89ef46abc848e2155c500483fb6e2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b67f17112cf2815a804580dcec7b4e673bcea0387bb4368281f7f50909e6f88,PodSandboxId:4cd68ac5da8546ddd179151175057683c5062dd295f975b3a9f58bb492e38f1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760732136108327872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78c39b5921a80e047d06a4db6e9281,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d395d8351cec6dcef35840a3089902404274cbbf435997ed2569e9320cc4a9f,PodSandboxId:4a2b6f440a55e916d8d01b155293ebb0b821db9dcf40e75872439e92fa4df1b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760732136079780137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a464cee2ea17e9f43a84555acb69e3ef,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0988a09b-5903-4482-aaf4-84c27c7ba180 name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.403231411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50af2485-7ad8-4319-84fd-8ed5e27ff01e name=/runtime.v1.RuntimeService/Version
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.403320466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50af2485-7ad8-4319-84fd-8ed5e27ff01e name=/runtime.v1.RuntimeService/Version
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.404779099Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79eda5f5-3629-412d-85bc-b7cabbba7a79 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.405980834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732152405949738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79eda5f5-3629-412d-85bc-b7cabbba7a79 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.406985981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40810828-5d55-496e-93d5-b26f5c993dbf name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.407234727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40810828-5d55-496e-93d5-b26f5c993dbf name=/runtime.v1.RuntimeService/ListContainers
	Oct 17 20:15:52 test-preload-451716 crio[838]: time="2025-10-17 20:15:52.407609287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b6ee9c184c8af32274b3775f3d735e23f6941245c832755c803fc91d02ece37,PodSandboxId:cdef3f42382347d222502b4b0c398d13daab1d8bbab3848f9573c4507a76bcd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760732144129221971,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-rg2hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dc0c4dd-408a-4e61-aea4-b380c18474fc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce2a6f43a85f873938e220dd04f4a680ed492a07d689b18b8bebefc88d76b1c,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760732141269095087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f76a8df5a31b12d701ad4ee770511aaae9480f1021eb187c5debec8df7474a3,PodSandboxId:b46e65c1a2b3da0140fb92ce9a3e77b9bc117e99d54e8524ccda0c8807270705,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760732140597591040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmwmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
4544c6-c3b7-4099-b901-e830fdcd3b29,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef2ac09e7506d26e7611ce8f6fc19a0655fef5839b38d9b2a3002cfa7515052,PodSandboxId:a650d6bdec3f9c5314ed9dff166bc9f43c830978463d7565bce9fbb4ce2c8c98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760732140565079624,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6bb7aa-7ae2-4
abd-b8b5-cd5117243cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcbd5f248b4b125e60b1e08e9c88a76d87ee8a4758e7e91dcea225655de4cee,PodSandboxId:5a40dada1faf3eb6c37b399d82bdbd7db3094a573d03936917270040b74b5265,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760732136150573428,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c1bc00c9147a46ad4a6f20356536ea5,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ae3bbf420fc94498954b031476e79c26ca863026d6b038c18bf6e433263888d,PodSandboxId:53e4154c4319e85c425e07007e704f1181391e6711a67c16318125e8e4a2a93b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760732136145620724,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ab89ef46abc848e2155c500483fb6e2,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b67f17112cf2815a804580dcec7b4e673bcea0387bb4368281f7f50909e6f88,PodSandboxId:4cd68ac5da8546ddd179151175057683c5062dd295f975b3a9f58bb492e38f1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760732136108327872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a78c39b5921a80e047d06a4db6e9281,},Annotations:
map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d395d8351cec6dcef35840a3089902404274cbbf435997ed2569e9320cc4a9f,PodSandboxId:4a2b6f440a55e916d8d01b155293ebb0b821db9dcf40e75872439e92fa4df1b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760732136079780137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-451716,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a464cee2ea17e9f43a84555acb69e3ef,},Annotations:map[string]
string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40810828-5d55-496e-93d5-b26f5c993dbf name=/runtime.v1.RuntimeService/ListContainers
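	
	[editor's note] The Request/Response pairs above are status probes polling CRI-O over the CRI gRPC API; the socket is unix:///var/run/crio/crio.sock, per the node's cri-socket annotation further down. As a minimal Go sketch (not part of the test suite), the same ListContainers call can be issued with the k8s.io/cri-api client; the empty filter corresponds to the "No filters were applied, returning full container list" lines:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket shown in the logs; it is a local unix socket,
		// so no transport security is involved.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty ContainerFilter returns the full container list, matching
		// the ListContainersResponse dumps above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}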
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b6ee9c184c8a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   cdef3f4238234       coredns-668d6bf9bc-rg2hx
	cce2a6f43a85f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       2                   a650d6bdec3f9       storage-provisioner
	3f76a8df5a31b       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   11 seconds ago      Running             kube-proxy                1                   b46e65c1a2b3d       kube-proxy-hmwmf
	8ef2ac09e7506       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       1                   a650d6bdec3f9       storage-provisioner
	fbcbd5f248b4b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   16 seconds ago      Running             etcd                      1                   5a40dada1faf3       etcd-test-preload-451716
	7ae3bbf420fc9       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   16 seconds ago      Running             kube-scheduler            1                   53e4154c4319e       kube-scheduler-test-preload-451716
	1b67f17112cf2       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   16 seconds ago      Running             kube-controller-manager   1                   4cd68ac5da854       kube-controller-manager-test-preload-451716
	0d395d8351cec       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   16 seconds ago      Running             kube-apiserver            1                   4a2b6f440a55e       kube-apiserver-test-preload-451716
	
	
	==> coredns [9b6ee9c184c8af32274b3775f3d735e23f6941245c832755c803fc91d02ece37] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47001 - 31564 "HINFO IN 8045419330774906975.3930998809372848628. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069151917s
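	
	[editor's note] The random-label HINFO query above is CoreDNS's loop-detection probe: at startup the loop plugin sends an HINFO question with a random name, and the NXDOMAIN answer recorded here means the probe did not loop back through an upstream. An equivalent probe can be sketched with the third-party github.com/miekg/dns package (an assumption for illustration only; CoreDNS's own implementation differs):
	
	package main
	
	import (
		"fmt"
		"math/rand"
	
		"github.com/miekg/dns"
	)
	
	func main() {
		// Random label in the same style as "8045419330774906975.3930998809372848628."
		name := dns.Fqdn(fmt.Sprintf("%d.%d", rand.Int63(), rand.Int63()))
	
		m := new(dns.Msg)
		m.SetQuestion(name, dns.TypeHINFO)
	
		c := new(dns.Client) // UDP by default, matching the "udp 57" log entry
		r, _, err := c.Exchange(m, "127.0.0.1:53")
		if err != nil {
			panic(err)
		}
		// NXDOMAIN here means no forwarding loop was detected.
		fmt.Println("rcode:", dns.RcodeToString[r.Rcode])
	}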
	
	
	==> describe nodes <==
	Name:               test-preload-451716
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-451716
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=test-preload-451716
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_14_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:14:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-451716
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:15:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:15:41 +0000   Fri, 17 Oct 2025 20:14:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:15:41 +0000   Fri, 17 Oct 2025 20:14:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:15:41 +0000   Fri, 17 Oct 2025 20:14:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:15:41 +0000   Fri, 17 Oct 2025 20:15:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    test-preload-451716
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8ec4931545f47058b1bfcdf0b743dc1
	  System UUID:                b8ec4931-545f-4705-8b1b-fcdf0b743dc1
	  Boot ID:                    ee0ee6cf-5086-4562-83b3-b7670e4780f8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-rg2hx                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     74s
	  kube-system                 etcd-test-preload-451716                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         80s
	  kube-system                 kube-apiserver-test-preload-451716             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-test-preload-451716    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-hmwmf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-test-preload-451716             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 11s                kube-proxy       
	  Normal   NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node test-preload-451716 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node test-preload-451716 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node test-preload-451716 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    79s                kubelet          Node test-preload-451716 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  79s                kubelet          Node test-preload-451716 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     79s                kubelet          Node test-preload-451716 status is now: NodeHasSufficientPID
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Normal   NodeReady                78s                kubelet          Node test-preload-451716 status is now: NodeReady
	  Normal   RegisteredNode           75s                node-controller  Node test-preload-451716 event: Registered Node test-preload-451716 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node test-preload-451716 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node test-preload-451716 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node test-preload-451716 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 13s                kubelet          Node test-preload-451716 has been rebooted, boot id: ee0ee6cf-5086-4562-83b3-b7670e4780f8
	  Normal   RegisteredNode           10s                node-controller  Node test-preload-451716 event: Registered Node test-preload-451716 in Controller
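	
	[editor's note] The Conditions and Events tables above are kubectl-describe output; the same condition data can be fetched programmatically. A minimal client-go sketch, assuming the default minikube kubeconfig at ~/.kube/config:
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-451716", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Mirrors the table above: MemoryPressure, DiskPressure, PIDPressure, Ready.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}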
	
	
	==> dmesg <==
	[Oct17 20:15] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000041] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001128] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.970148] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084806] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.096040] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.533436] kauditd_printk_skb: 177 callbacks suppressed
	[  +4.361050] kauditd_printk_skb: 212 callbacks suppressed
	
	
	==> etcd [fbcbd5f248b4b125e60b1e08e9c88a76d87ee8a4758e7e91dcea225655de4cee] <==
	{"level":"info","ts":"2025-10-17T20:15:36.579711Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","added-peer-id":"903e0dada8362847","added-peer-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2025-10-17T20:15:36.579832Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:15:36.582360Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T20:15:36.579873Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:15:36.586872Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T20:15:36.592046Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2025-10-17T20:15:36.592076Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2025-10-17T20:15:36.592281Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"903e0dada8362847","initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T20:15:36.592349Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:15:38.158599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T20:15:38.158640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:15:38.158694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2025-10-17T20:15:38.158709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T20:15:38.158722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2025-10-17T20:15:38.158731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T20:15:38.158737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2025-10-17T20:15:38.160832Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:test-preload-451716 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:15:38.160846Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:15:38.161073Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:15:38.162149Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T20:15:38.163150Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T20:15:38.162210Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:15:38.162538Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T20:15:38.165449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	{"level":"info","ts":"2025-10-17T20:15:38.165554Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:15:52 up 0 min,  0 users,  load average: 0.80, 0.22, 0.07
	Linux test-preload-451716 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0d395d8351cec6dcef35840a3089902404274cbbf435997ed2569e9320cc4a9f] <==
	I1017 20:15:39.376752       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:15:39.379652       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1017 20:15:39.380039       1 policy_source.go:240] refreshing policies
	I1017 20:15:39.384751       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:15:39.385646       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1017 20:15:39.387765       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:15:39.393306       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:15:39.393321       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:15:39.393407       1 shared_informer.go:320] Caches are synced for configmaps
	I1017 20:15:39.393436       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1017 20:15:39.393764       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:15:39.393933       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:15:39.393964       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:15:39.393979       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:15:39.393994       1 cache.go:39] Caches are synced for autoregister controller
	E1017 20:15:39.425779       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:15:40.129428       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1017 20:15:40.285280       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:15:40.894729       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1017 20:15:40.949387       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1017 20:15:40.985594       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:15:41.008368       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:15:42.780502       1 controller.go:615] quota admission added evaluator for: endpoints
	I1017 20:15:42.882119       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:15:42.983017       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1b67f17112cf2815a804580dcec7b4e673bcea0387bb4368281f7f50909e6f88] <==
	I1017 20:15:42.603272       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:15:42.603283       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:15:42.613993       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1017 20:15:42.616340       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1017 20:15:42.617567       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1017 20:15:42.617635       1 shared_informer.go:320] Caches are synced for HPA
	I1017 20:15:42.617786       1 shared_informer.go:320] Caches are synced for garbage collector
	I1017 20:15:42.618963       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1017 20:15:42.622257       1 shared_informer.go:320] Caches are synced for disruption
	I1017 20:15:42.625946       1 shared_informer.go:320] Caches are synced for taint
	I1017 20:15:42.626388       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:15:42.626999       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-451716"
	I1017 20:15:42.627076       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:15:42.628189       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1017 20:15:42.628601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1017 20:15:42.628559       1 shared_informer.go:320] Caches are synced for persistent volume
	I1017 20:15:42.628569       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1017 20:15:42.628578       1 shared_informer.go:320] Caches are synced for cronjob
	I1017 20:15:42.635033       1 shared_informer.go:320] Caches are synced for daemon sets
	I1017 20:15:42.639357       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1017 20:15:42.990430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="361.708828ms"
	I1017 20:15:42.990519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.211µs"
	I1017 20:15:44.279672       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.82µs"
	I1017 20:15:48.302039       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="18.920462ms"
	I1017 20:15:48.302124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="43.531µs"
	
	
	==> kube-proxy [3f76a8df5a31b12d701ad4ee770511aaae9480f1021eb187c5debec8df7474a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1017 20:15:40.900851       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1017 20:15:40.917856       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1017 20:15:40.918034       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:15:40.969722       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1017 20:15:40.969762       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1017 20:15:40.969785       1 server_linux.go:170] "Using iptables Proxier"
	I1017 20:15:40.974995       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:15:40.975552       1 server.go:497] "Version info" version="v1.32.0"
	I1017 20:15:40.975612       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:15:40.978987       1 config.go:199] "Starting service config controller"
	I1017 20:15:40.979148       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1017 20:15:40.979728       1 config.go:105] "Starting endpoint slice config controller"
	I1017 20:15:40.979754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1017 20:15:40.980502       1 config.go:329] "Starting node config controller"
	I1017 20:15:40.980592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1017 20:15:41.080182       1 shared_informer.go:320] Caches are synced for service config
	I1017 20:15:41.080203       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1017 20:15:41.082425       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7ae3bbf420fc94498954b031476e79c26ca863026d6b038c18bf6e433263888d] <==
	I1017 20:15:37.408193       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:15:39.316108       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:15:39.316145       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:15:39.316155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:15:39.316165       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:15:39.390119       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1017 20:15:39.390158       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:15:39.400432       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1017 20:15:39.401048       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:15:39.405043       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 20:15:39.401068       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:15:39.505279       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: I1017 20:15:39.445471    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: E1017 20:15:39.468455    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-451716\" already exists" pod="kube-system/kube-scheduler-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: I1017 20:15:39.468494    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: E1017 20:15:39.477536    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-451716\" already exists" pod="kube-system/etcd-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: I1017 20:15:39.477644    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: E1017 20:15:39.487479    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-451716\" already exists" pod="kube-system/kube-apiserver-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: I1017 20:15:39.487782    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-451716"
	Oct 17 20:15:39 test-preload-451716 kubelet[1160]: E1017 20:15:39.498728    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-451716\" already exists" pod="kube-system/kube-controller-manager-test-preload-451716"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: I1017 20:15:40.031939    1160 apiserver.go:52] "Watching apiserver"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: E1017 20:15:40.035466    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-rg2hx" podUID="8dc0c4dd-408a-4e61-aea4-b380c18474fc"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: I1017 20:15:40.041753    1160 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: I1017 20:15:40.122407    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114544c6-c3b7-4099-b901-e830fdcd3b29-xtables-lock\") pod \"kube-proxy-hmwmf\" (UID: \"114544c6-c3b7-4099-b901-e830fdcd3b29\") " pod="kube-system/kube-proxy-hmwmf"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: I1017 20:15:40.123023    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114544c6-c3b7-4099-b901-e830fdcd3b29-lib-modules\") pod \"kube-proxy-hmwmf\" (UID: \"114544c6-c3b7-4099-b901-e830fdcd3b29\") " pod="kube-system/kube-proxy-hmwmf"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: I1017 20:15:40.123118    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6-tmp\") pod \"storage-provisioner\" (UID: \"7a6bb7aa-7ae2-4abd-b8b5-cd5117243cc6\") " pod="kube-system/storage-provisioner"
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: E1017 20:15:40.123206    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: E1017 20:15:40.123266    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc0c4dd-408a-4e61-aea4-b380c18474fc-config-volume podName:8dc0c4dd-408a-4e61-aea4-b380c18474fc nodeName:}" failed. No retries permitted until 2025-10-17 20:15:40.623245596 +0000 UTC m=+6.699490442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8dc0c4dd-408a-4e61-aea4-b380c18474fc-config-volume") pod "coredns-668d6bf9bc-rg2hx" (UID: "8dc0c4dd-408a-4e61-aea4-b380c18474fc") : object "kube-system"/"coredns" not registered
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: E1017 20:15:40.627504    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:15:40 test-preload-451716 kubelet[1160]: E1017 20:15:40.627572    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc0c4dd-408a-4e61-aea4-b380c18474fc-config-volume podName:8dc0c4dd-408a-4e61-aea4-b380c18474fc nodeName:}" failed. No retries permitted until 2025-10-17 20:15:41.627553125 +0000 UTC m=+7.703797970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8dc0c4dd-408a-4e61-aea4-b380c18474fc-config-volume") pod "coredns-668d6bf9bc-rg2hx" (UID: "8dc0c4dd-408a-4e61-aea4-b380c18474fc") : object "kube-system"/"coredns" not registered
	Oct 17 20:15:41 test-preload-451716 kubelet[1160]: I1017 20:15:41.150680    1160 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 17 20:15:41 test-preload-451716 kubelet[1160]: I1017 20:15:41.242413    1160 scope.go:117] "RemoveContainer" containerID="8ef2ac09e7506d26e7611ce8f6fc19a0655fef5839b38d9b2a3002cfa7515052"
	Oct 17 20:15:41 test-preload-451716 kubelet[1160]: E1017 20:15:41.637660    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 17 20:15:41 test-preload-451716 kubelet[1160]: E1017 20:15:41.637740    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc0c4dd-408a-4e61-aea4-b380c18474fc-config-volume podName:8dc0c4dd-408a-4e61-aea4-b380c18474fc nodeName:}" failed. No retries permitted until 2025-10-17 20:15:43.637726305 +0000 UTC m=+9.713971139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8dc0c4dd-408a-4e61-aea4-b380c18474fc-config-volume") pod "coredns-668d6bf9bc-rg2hx" (UID: "8dc0c4dd-408a-4e61-aea4-b380c18474fc") : object "kube-system"/"coredns" not registered
	Oct 17 20:15:44 test-preload-451716 kubelet[1160]: E1017 20:15:44.103771    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732144100859149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 17 20:15:44 test-preload-451716 kubelet[1160]: E1017 20:15:44.103836    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760732144100859149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 17 20:15:48 test-preload-451716 kubelet[1160]: I1017 20:15:48.262821    1160 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [8ef2ac09e7506d26e7611ce8f6fc19a0655fef5839b38d9b2a3002cfa7515052] <==
	I1017 20:15:40.753538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:15:40.758303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [cce2a6f43a85f873938e220dd04f4a680ed492a07d689b18b8bebefc88d76b1c] <==
	I1017 20:15:41.465463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:15:41.485870       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:15:41.486366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-451716 -n test-preload-451716
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-451716 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-451716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-451716
--- FAIL: TestPreload (131.99s)
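To reproduce this failure in isolation, the test can be re-run on its own against the same driver and runtime combination. A minimal sketch, assuming a standard minikube source checkout (the package path and the --minikube-start-args flag are assumptions; adjust to your environment):

    # from the minikube repo root: run only TestPreload with the kvm2/crio combination used in this report
    go test ./test/integration -run 'TestPreload$' -timeout 30m \
      --minikube-start-args='--driver=kvm2 --container-runtime=crio'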
Test pass (288/330)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.65
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 12.93
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.72
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.68
22 TestOffline 100.4
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 199.81
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 11.52
35 TestAddons/parallel/Registry 17.69
36 TestAddons/parallel/RegistryCreds 0.73
38 TestAddons/parallel/InspektorGadget 6.35
39 TestAddons/parallel/MetricsServer 6.2
41 TestAddons/parallel/CSI 45.87
42 TestAddons/parallel/Headlamp 20.13
43 TestAddons/parallel/CloudSpanner 5.96
44 TestAddons/parallel/LocalPath 57.82
45 TestAddons/parallel/NvidiaDevicePlugin 7.15
46 TestAddons/parallel/Yakd 12.2
48 TestAddons/StoppedEnableDisable 86.65
49 TestCertOptions 85.4
50 TestCertExpiration 295.03
52 TestForceSystemdFlag 50.9
53 TestForceSystemdEnv 41.04
55 TestKVMDriverInstallOrUpdate 1.58
59 TestErrorSpam/setup 39.57
60 TestErrorSpam/start 0.38
61 TestErrorSpam/status 0.83
62 TestErrorSpam/pause 1.7
63 TestErrorSpam/unpause 1.81
64 TestErrorSpam/stop 86.75
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 55.12
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 31.99
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.94
76 TestFunctional/serial/CacheCmd/cache/add_local 2.64
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.21
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 31.76
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.49
87 TestFunctional/serial/LogsFileCmd 1.47
88 TestFunctional/serial/InvalidService 4.13
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 25.45
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 1.36
98 TestFunctional/parallel/ServiceCmdConnect 10.63
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 48.32
102 TestFunctional/parallel/SSHCmd 0.47
103 TestFunctional/parallel/CpCmd 1.39
104 TestFunctional/parallel/MySQL 23.79
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.58
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
114 TestFunctional/parallel/License 0.78
115 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.64
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
131 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
132 TestFunctional/parallel/ImageCommands/Setup 1.81
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.22
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
139 TestFunctional/parallel/ServiceCmd/List 0.34
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.41
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
143 TestFunctional/parallel/ServiceCmd/Format 0.38
144 TestFunctional/parallel/ServiceCmd/URL 0.39
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.66
149 TestFunctional/parallel/MountCmd/any-port 21.26
150 TestFunctional/parallel/ProfileCmd/profile_list 0.54
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
152 TestFunctional/parallel/MountCmd/specific-port 2.12
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 200.6
162 TestMultiControlPlane/serial/DeployApp 6.9
163 TestMultiControlPlane/serial/PingHostFromPods 1.24
164 TestMultiControlPlane/serial/AddWorkerNode 43.59
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
167 TestMultiControlPlane/serial/CopyFile 13.57
168 TestMultiControlPlane/serial/StopSecondaryNode 81.57
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 42.98
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.17
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 390.31
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.64
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 268.12
176 TestMultiControlPlane/serial/RestartCluster 88.25
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 83.64
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 55.92
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.81
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 79.92
215 TestMountStart/serial/StartWithMountFirst 23.86
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 20.95
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.37
221 TestMountStart/serial/Stop 1.26
222 TestMountStart/serial/RestartStopped 19.78
223 TestMountStart/serial/VerifyMountPostStop 0.39
226 TestMultiNode/serial/FreshStart2Nodes 100.01
227 TestMultiNode/serial/DeployApp2Nodes 5.69
228 TestMultiNode/serial/PingHostFrom2Pods 0.83
229 TestMultiNode/serial/AddNode 46.29
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.47
233 TestMultiNode/serial/StopNode 2.48
234 TestMultiNode/serial/StartAfterStop 38.62
235 TestMultiNode/serial/RestartKeepsNodes 298.91
236 TestMultiNode/serial/DeleteNode 2.87
237 TestMultiNode/serial/StopMultiNode 174.58
238 TestMultiNode/serial/RestartMultiNode 95.08
239 TestMultiNode/serial/ValidateNameConflict 40.06
246 TestScheduledStopUnix 111.8
250 TestRunningBinaryUpgrade 102.99
252 TestKubernetesUpgrade 160.3
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 103.27
260 TestNoKubernetes/serial/StartWithStopK8s 7.38
265 TestNetworkPlugins/group/false 3.84
269 TestNoKubernetes/serial/Start 21.83
270 TestStoppedBinaryUpgrade/Setup 2.63
271 TestStoppedBinaryUpgrade/Upgrade 128.75
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
273 TestNoKubernetes/serial/ProfileList 1.16
274 TestNoKubernetes/serial/Stop 1.46
275 TestNoKubernetes/serial/StartNoArgs 35.22
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
286 TestPause/serial/Start 60.65
287 TestNetworkPlugins/group/auto/Start 90.31
288 TestNetworkPlugins/group/kindnet/Start 91.94
289 TestPause/serial/SecondStartNoReconfiguration 65.63
290 TestNetworkPlugins/group/calico/Start 101.53
291 TestNetworkPlugins/group/auto/KubeletFlags 0.25
292 TestNetworkPlugins/group/auto/NetCatPod 10.3
293 TestNetworkPlugins/group/auto/DNS 0.17
294 TestNetworkPlugins/group/auto/Localhost 0.13
295 TestNetworkPlugins/group/auto/HairPin 0.15
296 TestPause/serial/Pause 0.95
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestPause/serial/VerifyStatus 0.29
299 TestPause/serial/Unpause 0.79
300 TestPause/serial/PauseAgain 1.01
301 TestPause/serial/DeletePaused 0.93
302 TestPause/serial/VerifyDeletedResources 4.71
303 TestNetworkPlugins/group/custom-flannel/Start 71.87
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
305 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
306 TestNetworkPlugins/group/enable-default-cni/Start 81.32
307 TestNetworkPlugins/group/kindnet/DNS 0.2
308 TestNetworkPlugins/group/kindnet/Localhost 0.16
309 TestNetworkPlugins/group/kindnet/HairPin 0.16
310 TestNetworkPlugins/group/flannel/Start 85.39
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.26
313 TestNetworkPlugins/group/calico/NetCatPod 12.29
314 TestNetworkPlugins/group/calico/DNS 0.17
315 TestNetworkPlugins/group/calico/Localhost 0.16
316 TestNetworkPlugins/group/calico/HairPin 0.16
317 TestNetworkPlugins/group/bridge/Start 67.61
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.31
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
322 TestNetworkPlugins/group/custom-flannel/DNS 0.2
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
329 TestStartStop/group/old-k8s-version/serial/FirstStart 67.24
331 TestStartStop/group/no-preload/serial/FirstStart 95.97
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
334 TestNetworkPlugins/group/flannel/NetCatPod 10.24
335 TestNetworkPlugins/group/flannel/DNS 0.15
336 TestNetworkPlugins/group/flannel/Localhost 0.14
337 TestNetworkPlugins/group/flannel/HairPin 0.14
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
339 TestNetworkPlugins/group/bridge/NetCatPod 11.41
340 TestNetworkPlugins/group/bridge/DNS 0.17
341 TestNetworkPlugins/group/bridge/Localhost 0.16
342 TestNetworkPlugins/group/bridge/HairPin 0.17
344 TestStartStop/group/embed-certs/serial/FirstStart 63.43
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 65.95
347 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.29
349 TestStartStop/group/old-k8s-version/serial/Stop 72.92
350 TestStartStop/group/no-preload/serial/DeployApp 11.32
351 TestStartStop/group/embed-certs/serial/DeployApp 11.31
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
353 TestStartStop/group/no-preload/serial/Stop 89.1
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
355 TestStartStop/group/embed-certs/serial/Stop 84.59
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 82.34
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/old-k8s-version/serial/SecondStart 44.69
361 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.01
362 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
363 TestStartStop/group/no-preload/serial/SecondStart 57.62
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
365 TestStartStop/group/embed-certs/serial/SecondStart 62.67
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
368 TestStartStop/group/old-k8s-version/serial/Pause 2.92
370 TestStartStop/group/newest-cni/serial/FirstStart 68.16
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
372 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 83.22
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
377 TestStartStop/group/no-preload/serial/Pause 3.58
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
380 TestStartStop/group/embed-certs/serial/Pause 3.99
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
383 TestStartStop/group/newest-cni/serial/Stop 11.53
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
385 TestStartStop/group/newest-cni/serial/SecondStart 34.77
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
393 TestStartStop/group/newest-cni/serial/Pause 4.17
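The table above is plain whitespace-separated text (order, test name, duration in seconds), so the slowest tests can be surfaced with standard shell tools. A quick sketch, assuming the passed-test rows have been saved to a file passed.txt:

    # sort numerically on the duration column, descending, and show the ten slowest tests
    sort -k3 -rn passed.txt | head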
TestDownloadOnly/v1.28.0/json-events (22.65s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-497114 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-497114 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.646515527s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.65s)
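The --download-only run above fetches the ISO and preload tarball without ever creating a VM, which is why this test stays cheap. A hedged example of warming the cache the same way by hand (the profile name is illustrative):

    out/minikube-linux-amd64 start --download-only -p cache-warmup \
      --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio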

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 19:22:05.580436  113592 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1017 19:22:05.580521  113592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
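preload-exists is a plain file-existence check under MINIKUBE_HOME; it can be repeated by hand with the path copied from the log above:

    ls -lh /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/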

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-497114
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-497114: exit status 85 (66.256781ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-497114 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-497114 │ jenkins │ v1.37.0 │ 17 Oct 25 19:21 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:21:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:21:42.978557  113605 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:21:42.978859  113605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:21:42.978869  113605 out.go:374] Setting ErrFile to fd 2...
	I1017 19:21:42.978874  113605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:21:42.979115  113605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	W1017 19:21:42.979258  113605 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21664-109682/.minikube/config/config.json: open /home/jenkins/minikube-integration/21664-109682/.minikube/config/config.json: no such file or directory
	I1017 19:21:42.979765  113605 out.go:368] Setting JSON to true
	I1017 19:21:42.980656  113605 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3844,"bootTime":1760725059,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:21:42.980767  113605 start.go:141] virtualization: kvm guest
	I1017 19:21:42.982776  113605 out.go:99] [download-only-497114] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1017 19:21:42.982940  113605 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 19:21:42.982942  113605 notify.go:220] Checking for updates...
	I1017 19:21:42.984294  113605 out.go:171] MINIKUBE_LOCATION=21664
	I1017 19:21:42.985494  113605 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:21:42.987220  113605 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 19:21:42.988524  113605 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:21:42.989633  113605 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 19:21:42.991704  113605 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 19:21:42.991960  113605 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:21:43.514087  113605 out.go:99] Using the kvm2 driver based on user configuration
	I1017 19:21:43.514141  113605 start.go:305] selected driver: kvm2
	I1017 19:21:43.514148  113605 start.go:925] validating driver "kvm2" against <nil>
	I1017 19:21:43.514460  113605 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:21:43.514581  113605 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:21:43.530275  113605 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:21:43.530310  113605 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:21:43.544480  113605 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:21:43.544530  113605 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:21:43.545042  113605 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1017 19:21:43.545202  113605 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 19:21:43.545225  113605 cni.go:84] Creating CNI manager for ""
	I1017 19:21:43.545261  113605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:21:43.545272  113605 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 19:21:43.545327  113605 start.go:349] cluster config:
	{Name:download-only-497114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-497114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:21:43.545499  113605 iso.go:125] acquiring lock: {Name:mk2487fdd858c1cb489b6312535f031f58d5b643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:21:43.547664  113605 out.go:99] Downloading VM boot image ...
	I1017 19:21:43.547714  113605 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21664-109682/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1017 19:21:53.247174  113605 out.go:99] Starting "download-only-497114" primary control-plane node in "download-only-497114" cluster
	I1017 19:21:53.247209  113605 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:21:53.339739  113605 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1017 19:21:53.339775  113605 cache.go:58] Caching tarball of preloaded images
	I1017 19:21:53.339987  113605 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:21:53.341951  113605 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1017 19:21:53.341989  113605 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 19:21:53.442279  113605 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1017 19:21:53.442413  113605 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-497114 host does not exist
	  To start a cluster, run: "minikube start -p download-only-497114"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
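Note that the PASS verdict here tolerates a non-zero exit: 'minikube logs' returns exit status 85 for a profile whose host was never created, and this subtest only measures how long the command takes. A quick way to confirm the behavior from this run (the expected code is taken from the output above):

    out/minikube-linux-amd64 logs -p download-only-497114; echo "exit=$?"   # expect exit=85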

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-497114
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (12.93s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-651643 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-651643 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.93278705s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.93s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 19:22:18.875998  113592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1017 19:22:18.876088  113592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.72s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-651643
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-651643: exit status 85 (723.044811ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-497114 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-497114 │ jenkins │ v1.37.0 │ 17 Oct 25 19:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ delete  │ -p download-only-497114                                                                                                                                                                             │ download-only-497114 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ start   │ -o=json --download-only -p download-only-651643 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-651643 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:22:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:22:05.985492  113871 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:22:05.985760  113871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:22:05.985769  113871 out.go:374] Setting ErrFile to fd 2...
	I1017 19:22:05.985773  113871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:22:05.985955  113871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 19:22:05.986423  113871 out.go:368] Setting JSON to true
	I1017 19:22:05.987265  113871 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3867,"bootTime":1760725059,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:22:05.987367  113871 start.go:141] virtualization: kvm guest
	I1017 19:22:05.989512  113871 out.go:99] [download-only-651643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:22:05.989669  113871 notify.go:220] Checking for updates...
	I1017 19:22:05.991008  113871 out.go:171] MINIKUBE_LOCATION=21664
	I1017 19:22:05.992482  113871 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:22:05.993713  113871 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 19:22:05.994993  113871 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:22:05.996264  113871 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 19:22:05.998742  113871 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 19:22:05.999019  113871 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:22:06.030459  113871 out.go:99] Using the kvm2 driver based on user configuration
	I1017 19:22:06.030509  113871 start.go:305] selected driver: kvm2
	I1017 19:22:06.030519  113871 start.go:925] validating driver "kvm2" against <nil>
	I1017 19:22:06.030841  113871 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:22:06.030978  113871 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:22:06.044862  113871 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:22:06.044899  113871 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21664-109682/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 19:22:06.061216  113871 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 19:22:06.061278  113871 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:22:06.061980  113871 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1017 19:22:06.062193  113871 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 19:22:06.062223  113871 cni.go:84] Creating CNI manager for ""
	I1017 19:22:06.062263  113871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1017 19:22:06.062275  113871 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 19:22:06.062342  113871 start.go:349] cluster config:
	{Name:download-only-651643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-651643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:22:06.062455  113871 iso.go:125] acquiring lock: {Name:mk2487fdd858c1cb489b6312535f031f58d5b643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:22:06.064326  113871 out.go:99] Starting "download-only-651643" primary control-plane node in "download-only-651643" cluster
	I1017 19:22:06.064347  113871 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:22:06.661872  113871 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1017 19:22:06.661916  113871 cache.go:58] Caching tarball of preloaded images
	I1017 19:22:06.662103  113871 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:22:06.664066  113871 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1017 19:22:06.664094  113871 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1017 19:22:06.765247  113871 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1017 19:22:06.765306  113871 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21664-109682/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	* The control-plane node download-only-651643 host does not exist
	  To start a cluster, run: "minikube start -p download-only-651643"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.72s)

TestDownloadOnly/v1.34.1/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-651643
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.68s)
=== RUN   TestBinaryMirror
I1017 19:22:20.154447  113592 binary.go:77] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-717550 --alsologtostderr --binary-mirror http://127.0.0.1:46365 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-717550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-717550
--- PASS: TestBinaryMirror (0.68s)

TestOffline (100.4s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-257632 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-257632 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.511516623s)
helpers_test.go:175: Cleaning up "offline-crio-257632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-257632
--- PASS: TestOffline (100.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-322722
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-322722: exit status 85 (53.190912ms)

-- stdout --
	* Profile "addons-322722" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-322722"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-322722
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-322722: exit status 85 (51.842228ms)

-- stdout --
	* Profile "addons-322722" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-322722"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (199.81s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-322722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-322722 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m19.814409584s)
--- PASS: TestAddons/Setup (199.81s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-322722 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-322722 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (11.52s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-322722 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-322722 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dbcabfe6-793a-4be4-85b5-9d1a4812477c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dbcabfe6-793a-4be4-85b5-9d1a4812477c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.005114477s
addons_test.go:694: (dbg) Run:  kubectl --context addons-322722 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-322722 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-322722 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

TestAddons/parallel/Registry (17.69s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.284449ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-n24pg" [98612915-1ff9-4ccb-a7d3-b957aed88735] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.030898773s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7ntwn" [4004872e-0247-4c72-a17e-5ffef1c90027] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004227931s
addons_test.go:392: (dbg) Run:  kubectl --context addons-322722 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-322722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-322722 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.655816045s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 ip
2025/10/17 19:26:17 [DEBUG] GET http://192.168.39.86:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.69s)

TestAddons/parallel/RegistryCreds (0.73s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.173223ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-322722
addons_test.go:332: (dbg) Run:  kubectl --context addons-322722 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

TestAddons/parallel/InspektorGadget (6.35s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gxw4w" [fb45b0c7-be11-4878-bd15-b67e53ad4770] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003046087s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.35s)

TestAddons/parallel/MetricsServer (6.2s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 11.577057ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6xm4s" [b19952ba-7948-4378-80c6-cfeb5ca18fd6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004912833s
addons_test.go:463: (dbg) Run:  kubectl --context addons-322722 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable metrics-server --alsologtostderr -v=1: (1.092946336s)
--- PASS: TestAddons/parallel/MetricsServer (6.20s)

TestAddons/parallel/CSI (45.87s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1017 19:26:18.814064  113592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1017 19:26:18.820339  113592 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 19:26:18.820386  113592 kapi.go:107] duration metric: took 6.335053ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.356783ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-322722 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-322722 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [057be301-0716-4cf9-b63e-88ff28b0b478] Pending
helpers_test.go:352: "task-pv-pod" [057be301-0716-4cf9-b63e-88ff28b0b478] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [057be301-0716-4cf9-b63e-88ff28b0b478] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004869315s
addons_test.go:572: (dbg) Run:  kubectl --context addons-322722 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-322722 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-322722 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-322722 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-322722 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-322722 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-322722 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3f48096a-5c27-46b4-a48f-d42c31425d3a] Pending
helpers_test.go:352: "task-pv-pod-restore" [3f48096a-5c27-46b4-a48f-d42c31425d3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3f48096a-5c27-46b4-a48f-d42c31425d3a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005621828s
addons_test.go:614: (dbg) Run:  kubectl --context addons-322722 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-322722 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-322722 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable volumesnapshots --alsologtostderr -v=1: (1.011222379s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.016128027s)
--- PASS: TestAddons/parallel/CSI (45.87s)

TestAddons/parallel/Headlamp (20.13s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-322722 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-2nlxf" [77a3e526-6a6b-4093-b241-a8caff7ecd51] Pending
helpers_test.go:352: "headlamp-6945c6f4d-2nlxf" [77a3e526-6a6b-4093-b241-a8caff7ecd51] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-2nlxf" [77a3e526-6a6b-4093-b241-a8caff7ecd51] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005712484s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable headlamp --alsologtostderr -v=1: (6.2180595s)
--- PASS: TestAddons/parallel/Headlamp (20.13s)

TestAddons/parallel/CloudSpanner (5.96s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-gstn6" [1fe93ada-fc96-4392-912c-2e56057446b8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.274197915s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.96s)

TestAddons/parallel/LocalPath (57.82s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-322722 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-322722 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b7f8b5f6-1b89-4099-a1a2-89b1fd5a20fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b7f8b5f6-1b89-4099-a1a2-89b1fd5a20fd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b7f8b5f6-1b89-4099-a1a2-89b1fd5a20fd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.00899004s
addons_test.go:967: (dbg) Run:  kubectl --context addons-322722 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 ssh "cat /opt/local-path-provisioner/pvc-693455d1-f7f2-4ada-abe5-ab11ca9f9218_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-322722 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-322722 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.924336785s)
--- PASS: TestAddons/parallel/LocalPath (57.82s)

TestAddons/parallel/NvidiaDevicePlugin (7.15s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5b7p4" [c3831f56-865d-4bf2-bc81-2e3f4aeab7c2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.043107604s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.109236154s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.15s)

TestAddons/parallel/Yakd (12.2s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9hmkj" [2ad469b1-24e8-4beb-95b0-010c06fa838e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00397119s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-322722 addons disable yakd --alsologtostderr -v=1: (6.199006422s)
--- PASS: TestAddons/parallel/Yakd (12.20s)

TestAddons/StoppedEnableDisable (86.65s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-322722
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-322722: (1m26.360622001s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-322722
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-322722
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-322722
--- PASS: TestAddons/StoppedEnableDisable (86.65s)

TestCertOptions (85.4s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-092834 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-092834 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.019091601s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-092834 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-092834 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-092834 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-092834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-092834
--- PASS: TestCertOptions (85.40s)

TestCertExpiration (295.03s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292976 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292976 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.689274239s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292976 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292976 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.407803927s)
helpers_test.go:175: Cleaning up "cert-expiration-292976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-292976
--- PASS: TestCertExpiration (295.03s)

TestForceSystemdFlag (50.9s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-259956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-259956 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.822024294s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-259956 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-259956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-259956
--- PASS: TestForceSystemdFlag (50.90s)

TestForceSystemdEnv (41.04s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-281103 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-281103 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.969739079s)
helpers_test.go:175: Cleaning up "force-systemd-env-281103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-281103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-281103: (2.072733322s)
--- PASS: TestForceSystemdEnv (41.04s)

TestKVMDriverInstallOrUpdate (1.58s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1017 20:19:39.268974  113592 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 20:19:39.269144  113592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1839151650/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 20:19:39.301824  113592 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1839151650/001/docker-machine-driver-kvm2 version is 1.1.1
W1017 20:19:39.301882  113592 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1017 20:19:39.302023  113592 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1017 20:19:39.302079  113592 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1839151650/001/docker-machine-driver-kvm2
I1017 20:19:40.446573  113592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1839151650/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 20:19:40.464398  113592 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1839151650/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.58s)

TestErrorSpam/setup (39.57s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-290050 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-290050 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:30:41.372750  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:41.379181  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:41.390573  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:41.412076  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:41.453532  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:41.535052  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:41.696662  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:42.018415  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:42.660534  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:43.942013  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:46.505054  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:30:51.626580  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:31:01.869593  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-290050 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-290050 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.566086147s)
--- PASS: TestErrorSpam/setup (39.57s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.83s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.7s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 pause
--- PASS: TestErrorSpam/pause (1.70s)

TestErrorSpam/unpause (1.81s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (86.75s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 stop
E1017 19:31:22.351181  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:32:03.314380  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 stop: (1m22.852186394s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 stop: (2.016945439s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-290050 --log_dir /tmp/nospam-290050 stop: (1.882867333s)
--- PASS: TestErrorSpam/stop (86.75s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21664-109682/.minikube/files/etc/test/nested/copy/113592/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.12s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-993605 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:33:25.238145  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-993605 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.116181491s)
--- PASS: TestFunctional/serial/StartWithProxy (55.12s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.99s)
=== RUN   TestFunctional/serial/SoftStart
I1017 19:33:34.051113  113592 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-993605 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-993605 --alsologtostderr -v=8: (31.987427494s)
functional_test.go:678: soft start took 31.988168239s for "functional-993605" cluster.
I1017 19:34:06.038884  113592 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (31.99s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-993605 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.94s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 cache add registry.k8s.io/pause:3.1: (1.611309256s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 cache add registry.k8s.io/pause:3.3: (1.663979844s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 cache add registry.k8s.io/pause:latest: (1.66724595s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.94s)

TestFunctional/serial/CacheCmd/cache/add_local (2.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-993605 /tmp/TestFunctionalserialCacheCmdcacheadd_local633674553/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cache add minikube-local-cache-test:functional-993605
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 cache add minikube-local-cache-test:functional-993605: (2.285480028s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cache delete minikube-local-cache-test:functional-993605
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-993605
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.64s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.42867ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 cache reload: (1.494340772s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
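
The cache_reload round trip above can be reproduced by hand; every command appears verbatim in the log (minikube again stands for the binary under test):

  # remove the image from the node's runtime and confirm it is gone (crictl inspecti exits 1)
  minikube -p functional-993605 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-993605 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # repopulate the node from the host-side cache; the same inspecti now exits 0
  minikube -p functional-993605 cache reload
  minikube -p functional-993605 ssh sudo crictl inspecti registry.k8s.io/pause:latest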

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 kubectl -- --context functional-993605 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-993605 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (31.76s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-993605 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-993605 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.759429809s)
functional_test.go:776: restart took 31.759587065s for "functional-993605" cluster.
I1017 19:34:48.397315  113592 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (31.76s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-993605 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 logs: (1.486412258s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 logs --file /tmp/TestFunctionalserialLogsFileCmd3344856667/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 logs --file /tmp/TestFunctionalserialLogsFileCmd3344856667/001/logs.txt: (1.472845474s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.13s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-993605 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-993605
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-993605: exit status 115 (291.331073ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.105:31891 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-993605 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)
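
In outline, the negative case exercised above (testdata/invalidsvc.yaml is the fixture from the test tree: a service with no running backing pod):

  kubectl --context functional-993605 apply -f testdata/invalidsvc.yaml
  minikube -p functional-993605 service invalid-svc    # exit 115: SVC_UNREACHABLE, no running pod for the service
  kubectl --context functional-993605 delete -f testdata/invalidsvc.yaml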

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 config get cpus: exit status 14 (61.333551ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 config get cpus: exit status 14 (52.658081ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
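
The exit codes above encode the config lifecycle: getting an unset key fails with status 14, so the round trip looks like this sketch:

  minikube -p functional-993605 config get cpus      # exit 14 while the key is unset
  minikube -p functional-993605 config set cpus 2
  minikube -p functional-993605 config get cpus      # prints 2, exit 0
  minikube -p functional-993605 config unset cpus
  minikube -p functional-993605 config get cpus      # exit 14 again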

TestFunctional/parallel/DashboardCmd (25.45s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-993605 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-993605 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 122162: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.45s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-993605 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-993605 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (148.346395ms)
-- stdout --
	* [functional-993605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1017 19:35:09.093746  121806 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:35:09.094110  121806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:35:09.094124  121806 out.go:374] Setting ErrFile to fd 2...
	I1017 19:35:09.094130  121806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:35:09.094467  121806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 19:35:09.095173  121806 out.go:368] Setting JSON to false
	I1017 19:35:09.096492  121806 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4650,"bootTime":1760725059,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:35:09.096575  121806 start.go:141] virtualization: kvm guest
	I1017 19:35:09.098807  121806 out.go:179] * [functional-993605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:35:09.100206  121806 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:35:09.100244  121806 notify.go:220] Checking for updates...
	I1017 19:35:09.102539  121806 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:35:09.103681  121806 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 19:35:09.104703  121806 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:35:09.105748  121806 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:35:09.108124  121806 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:35:09.110383  121806 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:35:09.110984  121806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:35:09.111051  121806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:35:09.126662  121806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1017 19:35:09.127304  121806 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:35:09.127925  121806 main.go:141] libmachine: Using API Version  1
	I1017 19:35:09.127958  121806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:35:09.128381  121806 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:35:09.128583  121806 main.go:141] libmachine: (functional-993605) Calling .DriverName
	I1017 19:35:09.128890  121806 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:35:09.129251  121806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:35:09.129298  121806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:35:09.144454  121806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I1017 19:35:09.145065  121806 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:35:09.145579  121806 main.go:141] libmachine: Using API Version  1
	I1017 19:35:09.145608  121806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:35:09.146010  121806 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:35:09.146224  121806 main.go:141] libmachine: (functional-993605) Calling .DriverName
	I1017 19:35:09.181005  121806 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:35:09.182544  121806 start.go:305] selected driver: kvm2
	I1017 19:35:09.182563  121806 start.go:925] validating driver "kvm2" against &{Name:functional-993605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-993605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:35:09.182701  121806 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:35:09.184876  121806 out.go:203] 
	W1017 19:35:09.186189  121806 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1017 19:35:09.187679  121806 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-993605 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.30s)
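
Both dry runs validate the existing profile without touching the VM; only the memory request decides the outcome. A condensed sketch of the two invocations (flags abbreviated from the log):

  # 250MB is below minikube's 1800MB floor, so validation fails with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
  minikube start -p functional-993605 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # without the undersized memory request the same dry run exits 0
  minikube start -p functional-993605 --dry-run --driver=kvm2 --container-runtime=crio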

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-993605 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-993605 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (172.333353ms)
-- stdout --
	* [functional-993605] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I1017 19:35:09.394008  121891 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:35:09.394346  121891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:35:09.394360  121891 out.go:374] Setting ErrFile to fd 2...
	I1017 19:35:09.394367  121891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:35:09.394729  121891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 19:35:09.395222  121891 out.go:368] Setting JSON to false
	I1017 19:35:09.396129  121891 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4650,"bootTime":1760725059,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:35:09.396229  121891 start.go:141] virtualization: kvm guest
	I1017 19:35:09.398276  121891 out.go:179] * [functional-993605] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1017 19:35:09.399908  121891 notify.go:220] Checking for updates...
	I1017 19:35:09.399964  121891 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:35:09.401336  121891 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:35:09.402796  121891 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 19:35:09.404011  121891 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 19:35:09.405250  121891 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:35:09.406443  121891 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:35:09.408394  121891 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:35:09.409033  121891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:35:09.409107  121891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:35:09.424338  121891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I1017 19:35:09.428430  121891 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:35:09.429250  121891 main.go:141] libmachine: Using API Version  1
	I1017 19:35:09.429297  121891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:35:09.429686  121891 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:35:09.429935  121891 main.go:141] libmachine: (functional-993605) Calling .DriverName
	I1017 19:35:09.430280  121891 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:35:09.430770  121891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:35:09.430819  121891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:35:09.448768  121891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I1017 19:35:09.449386  121891 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:35:09.449893  121891 main.go:141] libmachine: Using API Version  1
	I1017 19:35:09.449948  121891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:35:09.450340  121891 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:35:09.450524  121891 main.go:141] libmachine: (functional-993605) Calling .DriverName
	I1017 19:35:09.500783  121891 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1017 19:35:09.502336  121891 start.go:305] selected driver: kvm2
	I1017 19:35:09.502358  121891 start.go:925] validating driver "kvm2" against &{Name:functional-993605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-993605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:35:09.502493  121891 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:35:09.505917  121891 out.go:203] 
	W1017 19:35:09.507086  121891 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1017 19:35:09.511380  121891 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.36s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)
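
The three status forms exercised above, sketched with a simplified Go template (the log's template also probes .Kubelet and .Kubeconfig):

  minikube -p functional-993605 status           # human-readable summary
  minikube -p functional-993605 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
  minikube -p functional-993605 status -o json   # machine-readable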

TestFunctional/parallel/ServiceCmdConnect (10.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-993605 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-993605 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-m2rbx" [5e790ec4-b6df-424c-b453-bc4d77ae8a6e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-m2rbx" [5e790ec4-b6df-424c-b453-bc4d77ae8a6e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005978616s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.105:31046
functional_test.go:1680: http://192.168.39.105:31046: success! body:
Request served by hello-node-connect-7d85dfc575-m2rbx
HTTP/1.1 GET /
Host: 192.168.39.105:31046
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.63s)
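
The connectivity check above amounts to this sequence; the final curl is a stand-in for the HTTP GET the test performs in Go (the echoed request is shown in the log):

  kubectl --context functional-993605 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-993605 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-993605 service hello-node-connect --url)
  curl "$URL"    # echo-server replies with the request it received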

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (48.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [bc861159-f25b-463c-93e8-a7e789bafd6c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002928512s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-993605 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-993605 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-993605 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-993605 apply -f testdata/storage-provisioner/pod.yaml
I1017 19:35:02.385897  113592 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3db40e01-0740-4dc4-ad3f-6101c14deeab] Pending
helpers_test.go:352: "sp-pod" [3db40e01-0740-4dc4-ad3f-6101c14deeab] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3db40e01-0740-4dc4-ad3f-6101c14deeab] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005151749s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-993605 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-993605 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-993605 delete -f testdata/storage-provisioner/pod.yaml: (2.242667068s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-993605 apply -f testdata/storage-provisioner/pod.yaml
I1017 19:35:20.192447  113592 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [82e86b6d-209c-4a70-ada0-ad9993ca2cf0] Pending
helpers_test.go:352: "sp-pod" [82e86b6d-209c-4a70-ada0-ad9993ca2cf0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [82e86b6d-209c-4a70-ada0-ad9993ca2cf0] Running
E1017 19:35:41.364651  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004464401s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-993605 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.32s)
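
The persistence check above hinges on a file surviving a pod delete/recreate while the claim stays bound; in outline (fixtures from the test tree):

  kubectl --context functional-993605 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-993605 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-993605 exec sp-pod -- touch /tmp/mount/foo     # write through the mounted claim
  kubectl --context functional-993605 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-993605 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-993605 exec sp-pod -- ls /tmp/mount            # foo is still there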

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh -n functional-993605 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cp functional-993605:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd144716800/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh -n functional-993605 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh -n functional-993605 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)
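
The cp round trip above, sketched with a placeholder destination path in place of the test's temp directory:

  minikube -p functional-993605 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
  minikube -p functional-993605 ssh -n functional-993605 "sudo cat /home/docker/cp-test.txt"    # verify in the node
  minikube -p functional-993605 cp functional-993605:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host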

TestFunctional/parallel/MySQL (23.79s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-993605 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-l7wn2" [f1439e04-884e-40aa-a3a0-ec08ca082a62] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-l7wn2" [f1439e04-884e-40aa-a3a0-ec08ca082a62] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.007267841s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-993605 exec mysql-5bb876957f-l7wn2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-993605 exec mysql-5bb876957f-l7wn2 -- mysql -ppassword -e "show databases;": exit status 1 (171.310696ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1017 19:35:30.155823  113592 retry.go:31] will retry after 964.387634ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-993605 exec mysql-5bb876957f-l7wn2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-993605 exec mysql-5bb876957f-l7wn2 -- mysql -ppassword -e "show databases;": exit status 1 (173.928764ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1017 19:35:31.295403  113592 retry.go:31] will retry after 1.565197203s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-993605 exec mysql-5bb876957f-l7wn2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.79s)
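
The ERROR 2002 retries above only mean mysqld was still starting when the pod turned Running; a small retry loop (pod name from this run) covers that gap:

  # keep retrying until mysqld accepts connections on its socket
  until kubectl --context functional-993605 exec mysql-5bb876957f-l7wn2 -- \
      mysql -ppassword -e "show databases;"; do
    sleep 2
  done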

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/113592/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /etc/test/nested/copy/113592/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/113592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /etc/ssl/certs/113592.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/113592.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /usr/share/ca-certificates/113592.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1135922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /etc/ssl/certs/1135922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1135922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /usr/share/ca-certificates/1135922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
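
The checks above confirm that a single host certificate (named after this run's test PID, 113592) is synced into the VM under three paths; the .0 files appear to be the hash-named copies in the system cert directory:

  minikube -p functional-993605 ssh "sudo cat /etc/ssl/certs/113592.pem"
  minikube -p functional-993605 ssh "sudo cat /usr/share/ca-certificates/113592.pem"
  minikube -p functional-993605 ssh "sudo cat /etc/ssl/certs/51391683.0"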

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-993605 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
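
The template above prints the space-separated label keys of the first node, which the test then scans for the expected minikube labels:

  kubectl --context functional-993605 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'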

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh "sudo systemctl is-active docker": exit status 1 (198.153517ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh "sudo systemctl is-active containerd": exit status 1 (202.35225ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
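
Both probes above print "inactive" and exit 3 (the systemd convention for a dead unit, surfaced here as the ssh exit status); presumably the configured runtime would answer differently:

  minikube -p functional-993605 ssh "sudo systemctl is-active docker"       # inactive, exit 3
  minikube -p functional-993605 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
  minikube -p functional-993605 ssh "sudo systemctl is-active crio"         # the active runtime here; expected: active, exit 0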

TestFunctional/parallel/License (0.78s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.78s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-993605 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-993605 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-rbnx5" [d2b9b501-cf7c-4a19-a21b-ebdc402e40b9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-rbnx5" [d2b9b501-cf7c-4a19-a21b-ebdc402e40b9] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005272247s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-993605 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-993605
localhost/kicbase/echo-server:functional-993605
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-993605 image ls --format short --alsologtostderr:
I1017 19:35:33.877121  122877 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:33.877406  122877 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:33.877417  122877 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:33.877420  122877 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:33.877673  122877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
I1017 19:35:33.878329  122877 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:33.878452  122877 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:33.878960  122877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:33.879056  122877 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:33.893103  122877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41361
I1017 19:35:33.893639  122877 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:33.894376  122877 main.go:141] libmachine: Using API Version  1
I1017 19:35:33.894404  122877 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:33.894776  122877 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:33.895017  122877 main.go:141] libmachine: (functional-993605) Calling .GetState
I1017 19:35:33.897430  122877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:33.897481  122877 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:33.911040  122877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
I1017 19:35:33.911564  122877 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:33.912198  122877 main.go:141] libmachine: Using API Version  1
I1017 19:35:33.912232  122877 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:33.912602  122877 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:33.912831  122877 main.go:141] libmachine: (functional-993605) Calling .DriverName
I1017 19:35:33.913077  122877 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:33.913106  122877 main.go:141] libmachine: (functional-993605) Calling .GetSSHHostname
I1017 19:35:33.916493  122877 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:33.916987  122877 main.go:141] libmachine: (functional-993605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:01", ip: ""} in network mk-functional-993605: {Iface:virbr1 ExpiryTime:2025-10-17 20:32:54 +0000 UTC Type:0 Mac:52:54:00:1f:bb:01 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-993605 Clientid:01:52:54:00:1f:bb:01}
I1017 19:35:33.917045  122877 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined IP address 192.168.39.105 and MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:33.917187  122877 main.go:141] libmachine: (functional-993605) Calling .GetSSHPort
I1017 19:35:33.917384  122877 main.go:141] libmachine: (functional-993605) Calling .GetSSHKeyPath
I1017 19:35:33.917608  122877 main.go:141] libmachine: (functional-993605) Calling .GetSSHUsername
I1017 19:35:33.917779  122877 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/functional-993605/id_rsa Username:docker}
I1017 19:35:34.022365  122877 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:35:34.078292  122877 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.078310  122877 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.078580  122877 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.078600  122877 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:35:34.078609  122877 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.078613  122877 main.go:141] libmachine: (functional-993605) DBG | Closing plugin on server side
I1017 19:35:34.078618  122877 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.078943  122877 main.go:141] libmachine: (functional-993605) DBG | Closing plugin on server side
I1017 19:35:34.078951  122877 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.078984  122877 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-993605 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-993605  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-993605  │ 030f524f4c046 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-993605 image ls --format table --alsologtostderr:
I1017 19:35:34.541526  123000 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:34.541775  123000 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.541785  123000 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:34.541789  123000 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.542009  123000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
I1017 19:35:34.542594  123000 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.542684  123000 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.543074  123000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.543166  123000 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.557345  123000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
I1017 19:35:34.557943  123000 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.558479  123000 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.558502  123000 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.558936  123000 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.559198  123000 main.go:141] libmachine: (functional-993605) Calling .GetState
I1017 19:35:34.561365  123000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.561409  123000 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.575789  123000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
I1017 19:35:34.576233  123000 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.576669  123000 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.576692  123000 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.577075  123000 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.577274  123000 main.go:141] libmachine: (functional-993605) Calling .DriverName
I1017 19:35:34.577549  123000 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:34.577586  123000 main.go:141] libmachine: (functional-993605) Calling .GetSSHHostname
I1017 19:35:34.581203  123000 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.581729  123000 main.go:141] libmachine: (functional-993605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:01", ip: ""} in network mk-functional-993605: {Iface:virbr1 ExpiryTime:2025-10-17 20:32:54 +0000 UTC Type:0 Mac:52:54:00:1f:bb:01 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-993605 Clientid:01:52:54:00:1f:bb:01}
I1017 19:35:34.581755  123000 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined IP address 192.168.39.105 and MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.581923  123000 main.go:141] libmachine: (functional-993605) Calling .GetSSHPort
I1017 19:35:34.582122  123000 main.go:141] libmachine: (functional-993605) Calling .GetSSHKeyPath
I1017 19:35:34.582294  123000 main.go:141] libmachine: (functional-993605) Calling .GetSSHUsername
I1017 19:35:34.582530  123000 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/functional-993605/id_rsa Username:docker}
I1017 19:35:34.669559  123000 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:35:34.713170  123000 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.713192  123000 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.713494  123000 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.713516  123000 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:35:34.713519  123000 main.go:141] libmachine: (functional-993605) DBG | Closing plugin on server side
I1017 19:35:34.713523  123000 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.713638  123000 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.713915  123000 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.713928  123000 main.go:141] libmachine: Making call to close connection to plugin binary
2025/10/17 19:35:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-993605 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"030f524f4c0466ab7b02ec689fff18447255ee52c20145e3f16e45024b68588e","repoDigests":["localhost/minikube-local-cache-test@sha256:33b09bbd82d8daf6bad56bc82aee1e06014c5930b5c3ae701a4a875a903b3293"],"repoTags":["localhost/minikube-local-cache-test:functional-993605"],"size":"3330"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-993605"],"size":"4944818"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-993605 image ls --format json --alsologtostderr:
I1017 19:35:34.309161  122952 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:34.309428  122952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.309439  122952 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:34.309443  122952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.309660  122952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
I1017 19:35:34.310285  122952 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.310379  122952 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.310738  122952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.310803  122952 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.327575  122952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
I1017 19:35:34.328208  122952 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.328771  122952 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.328794  122952 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.329170  122952 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.329362  122952 main.go:141] libmachine: (functional-993605) Calling .GetState
I1017 19:35:34.331682  122952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.331742  122952 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.346632  122952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42035
I1017 19:35:34.347121  122952 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.347752  122952 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.347786  122952 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.348293  122952 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.348486  122952 main.go:141] libmachine: (functional-993605) Calling .DriverName
I1017 19:35:34.348712  122952 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:34.348744  122952 main.go:141] libmachine: (functional-993605) Calling .GetSSHHostname
I1017 19:35:34.352007  122952 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.352549  122952 main.go:141] libmachine: (functional-993605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:01", ip: ""} in network mk-functional-993605: {Iface:virbr1 ExpiryTime:2025-10-17 20:32:54 +0000 UTC Type:0 Mac:52:54:00:1f:bb:01 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-993605 Clientid:01:52:54:00:1f:bb:01}
I1017 19:35:34.352581  122952 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined IP address 192.168.39.105 and MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.352951  122952 main.go:141] libmachine: (functional-993605) Calling .GetSSHPort
I1017 19:35:34.353119  122952 main.go:141] libmachine: (functional-993605) Calling .GetSSHKeyPath
I1017 19:35:34.353263  122952 main.go:141] libmachine: (functional-993605) Calling .GetSSHUsername
I1017 19:35:34.353365  122952 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/functional-993605/id_rsa Username:docker}
I1017 19:35:34.437663  122952 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:35:34.479205  122952 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.479226  122952 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.479589  122952 main.go:141] libmachine: (functional-993605) DBG | Closing plugin on server side
I1017 19:35:34.479661  122952 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.479709  122952 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:35:34.479725  122952 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.479736  122952 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.480007  122952 main.go:141] libmachine: (functional-993605) DBG | Closing plugin on server side
I1017 19:35:34.480027  122952 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.480040  122952 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
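
For reference, the "image ls --format json" stdout above is a flat JSON array of image records with the fields id, repoDigests, repoTags, and size (size is a byte count encoded as a decimal string). A minimal Go sketch for decoding that output follows; the type and names below are illustrative, not part of minikube or this test suite:

	// decode_images.go (illustrative): reads the JSON array printed by
	// "minikube image ls --format json" on stdin and prints a short summary.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageRecord struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, as a decimal string
	}

	func main() {
		var images []imageRecord
		if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			id := img.ID
			if len(id) > 13 {
				id = id[:13] // same truncated form as the table output above
			}
			fmt.Printf("%s  %s  %s bytes\n", id, tag, img.Size)
		}
	}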

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-993605 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 030f524f4c0466ab7b02ec689fff18447255ee52c20145e3f16e45024b68588e
repoDigests:
- localhost/minikube-local-cache-test@sha256:33b09bbd82d8daf6bad56bc82aee1e06014c5930b5c3ae701a4a875a903b3293
repoTags:
- localhost/minikube-local-cache-test:functional-993605
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-993605
size: "4944818"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-993605 image ls --format yaml --alsologtostderr:
I1017 19:35:34.061543  122900 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:34.061869  122900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.061882  122900 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:34.061889  122900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.062189  122900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
I1017 19:35:34.063119  122900 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.063273  122900 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.063911  122900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.064019  122900 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.078790  122900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
I1017 19:35:34.079327  122900 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.079970  122900 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.080004  122900 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.080479  122900 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.080798  122900 main.go:141] libmachine: (functional-993605) Calling .GetState
I1017 19:35:34.083221  122900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.083280  122900 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.099633  122900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
I1017 19:35:34.100189  122900 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.100728  122900 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.100767  122900 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.101215  122900 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.101415  122900 main.go:141] libmachine: (functional-993605) Calling .DriverName
I1017 19:35:34.101647  122900 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:34.101684  122900 main.go:141] libmachine: (functional-993605) Calling .GetSSHHostname
I1017 19:35:34.105662  122900 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.106317  122900 main.go:141] libmachine: (functional-993605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:01", ip: ""} in network mk-functional-993605: {Iface:virbr1 ExpiryTime:2025-10-17 20:32:54 +0000 UTC Type:0 Mac:52:54:00:1f:bb:01 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-993605 Clientid:01:52:54:00:1f:bb:01}
I1017 19:35:34.106349  122900 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined IP address 192.168.39.105 and MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.106597  122900 main.go:141] libmachine: (functional-993605) Calling .GetSSHPort
I1017 19:35:34.106879  122900 main.go:141] libmachine: (functional-993605) Calling .GetSSHKeyPath
I1017 19:35:34.107061  122900 main.go:141] libmachine: (functional-993605) Calling .GetSSHUsername
I1017 19:35:34.107258  122900 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/functional-993605/id_rsa Username:docker}
I1017 19:35:34.208965  122900 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:35:34.254708  122900 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.254721  122900 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.255011  122900 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.255028  122900 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:35:34.255036  122900 main.go:141] libmachine: Making call to close driver server
I1017 19:35:34.255042  122900 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:34.255286  122900 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:34.255305  122900 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh pgrep buildkitd: exit status 1 (227.398342ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image build -t localhost/my-image:functional-993605 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 image build -t localhost/my-image:functional-993605 testdata/build --alsologtostderr: (3.474893492s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-993605 image build -t localhost/my-image:functional-993605 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4b391a43e97
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-993605
--> 47f3e59e11f
Successfully tagged localhost/my-image:functional-993605
47f3e59e11f038b69affcf6953eb8d22f51319ab9e82df8cda224fce4a87b297
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-993605 image build -t localhost/my-image:functional-993605 testdata/build --alsologtostderr:
I1017 19:35:34.364992  122965 out.go:360] Setting OutFile to fd 1 ...
I1017 19:35:34.365299  122965 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.365311  122965 out.go:374] Setting ErrFile to fd 2...
I1017 19:35:34.365315  122965 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:35:34.365561  122965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
I1017 19:35:34.366461  122965 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.367286  122965 config.go:182] Loaded profile config "functional-993605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:35:34.367707  122965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.367756  122965 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.382409  122965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46445
I1017 19:35:34.382974  122965 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.383593  122965 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.383620  122965 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.384040  122965 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.384301  122965 main.go:141] libmachine: (functional-993605) Calling .GetState
I1017 19:35:34.386428  122965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1017 19:35:34.386487  122965 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:35:34.400568  122965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
I1017 19:35:34.401112  122965 main.go:141] libmachine: () Calling .GetVersion
I1017 19:35:34.401593  122965 main.go:141] libmachine: Using API Version  1
I1017 19:35:34.401618  122965 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:35:34.402046  122965 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:35:34.402289  122965 main.go:141] libmachine: (functional-993605) Calling .DriverName
I1017 19:35:34.402552  122965 ssh_runner.go:195] Run: systemctl --version
I1017 19:35:34.402590  122965 main.go:141] libmachine: (functional-993605) Calling .GetSSHHostname
I1017 19:35:34.405463  122965 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.406041  122965 main.go:141] libmachine: (functional-993605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:01", ip: ""} in network mk-functional-993605: {Iface:virbr1 ExpiryTime:2025-10-17 20:32:54 +0000 UTC Type:0 Mac:52:54:00:1f:bb:01 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:functional-993605 Clientid:01:52:54:00:1f:bb:01}
I1017 19:35:34.406078  122965 main.go:141] libmachine: (functional-993605) DBG | domain functional-993605 has defined IP address 192.168.39.105 and MAC address 52:54:00:1f:bb:01 in network mk-functional-993605
I1017 19:35:34.406266  122965 main.go:141] libmachine: (functional-993605) Calling .GetSSHPort
I1017 19:35:34.406437  122965 main.go:141] libmachine: (functional-993605) Calling .GetSSHKeyPath
I1017 19:35:34.406622  122965 main.go:141] libmachine: (functional-993605) Calling .GetSSHUsername
I1017 19:35:34.406768  122965 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/functional-993605/id_rsa Username:docker}
I1017 19:35:34.492502  122965 build_images.go:161] Building image from path: /tmp/build.1926957565.tar
I1017 19:35:34.492555  122965 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1017 19:35:34.512307  122965 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1926957565.tar
I1017 19:35:34.518151  122965 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1926957565.tar: stat -c "%s %y" /var/lib/minikube/build/build.1926957565.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1926957565.tar': No such file or directory
I1017 19:35:34.518186  122965 ssh_runner.go:362] scp /tmp/build.1926957565.tar --> /var/lib/minikube/build/build.1926957565.tar (3072 bytes)
I1017 19:35:34.553391  122965 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1926957565
I1017 19:35:34.567372  122965 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1926957565 -xf /var/lib/minikube/build/build.1926957565.tar
I1017 19:35:34.582339  122965 crio.go:315] Building image: /var/lib/minikube/build/build.1926957565
I1017 19:35:34.582404  122965 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-993605 /var/lib/minikube/build/build.1926957565 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1017 19:35:37.752654  122965 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-993605 /var/lib/minikube/build/build.1926957565 --cgroup-manager=cgroupfs: (3.170214893s)
I1017 19:35:37.752749  122965 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1926957565
I1017 19:35:37.769956  122965 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1926957565.tar
I1017 19:35:37.782525  122965 build_images.go:217] Built localhost/my-image:functional-993605 from /tmp/build.1926957565.tar
I1017 19:35:37.782572  122965 build_images.go:133] succeeded building to: functional-993605
I1017 19:35:37.782579  122965 build_images.go:134] failed building to: 
I1017 19:35:37.782610  122965 main.go:141] libmachine: Making call to close driver server
I1017 19:35:37.782626  122965 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:37.783004  122965 main.go:141] libmachine: (functional-993605) DBG | Closing plugin on server side
I1017 19:35:37.783007  122965 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:37.783037  122965 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:35:37.783046  122965 main.go:141] libmachine: Making call to close driver server
I1017 19:35:37.783054  122965 main.go:141] libmachine: (functional-993605) Calling .Close
I1017 19:35:37.783278  122965 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:35:37.783296  122965 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)
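
As an aside, the STEP lines in the build output above imply that the testdata/build context used by this test is equivalent to the following three-line Dockerfile (reconstructed from this log; the actual file in the minikube repository may differ in details):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /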

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.789021567s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-993605
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image load --daemon kicbase/echo-server:functional-993605 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 image load --daemon kicbase/echo-server:functional-993605 --alsologtostderr: (1.002780031s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image load --daemon kicbase/echo-server:functional-993605 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-993605
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image load --daemon kicbase/echo-server:functional-993605 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image save kicbase/echo-server:functional-993605 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image rm kicbase/echo-server:functional-993605 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-993605
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 image save --daemon kicbase/echo-server:functional-993605 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-993605 image save --daemon kicbase/echo-server:functional-993605 --alsologtostderr: (2.36641795s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-993605
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.41s)
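
In outline, the round trip exercised above is (commands verbatim from this log):

	docker rmi kicbase/echo-server:functional-993605
	out/minikube-linux-amd64 -p functional-993605 image save --daemon kicbase/echo-server:functional-993605 --alsologtostderr
	docker image inspect localhost/kicbase/echo-server:functional-993605

Note that after the save, the image is inspected under the localhost/ prefix, which is where it reappears in the host docker daemon in this run.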

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 service list -o json
functional_test.go:1504: Took "331.052307ms" to run "out/minikube-linux-amd64 -p functional-993605 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.105:32758
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.105:32758
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.66s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.66s)

TestFunctional/parallel/MountCmd/any-port (21.26s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdany-port2434892866/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760729708464661194" to /tmp/TestFunctionalparallelMountCmdany-port2434892866/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760729708464661194" to /tmp/TestFunctionalparallelMountCmdany-port2434892866/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760729708464661194" to /tmp/TestFunctionalparallelMountCmdany-port2434892866/001/test-1760729708464661194
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.738325ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1017 19:35:08.799805  113592 retry.go:31] will retry after 651.273181ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 17 19:35 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 17 19:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 17 19:35 test-1760729708464661194
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh cat /mount-9p/test-1760729708464661194
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-993605 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [7bdda06a-2d9f-44f6-8020-202cca8698fe] Pending
helpers_test.go:352: "busybox-mount" [7bdda06a-2d9f-44f6-8020-202cca8698fe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [7bdda06a-2d9f-44f6-8020-202cca8698fe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [7bdda06a-2d9f-44f6-8020-202cca8698fe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.017362486s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-993605 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdany-port2434892866/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.26s)
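The mount flow above can be replayed by hand; a minimal sketch, assuming a running functional-993605 profile and an arbitrary host directory (/tmp/demo here is illustrative):

  # export a host directory into the guest over 9p (foreground process)
  minikube -p functional-993605 mount /tmp/demo:/mount-9p &
  # confirm the 9p mount is visible from inside the guest
  minikube -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p"
  # list the exported files from the guest side
  minikube -p functional-993605 ssh -- ls -la /mount-9p

The first findmnt probe in the run above failed and was retried after ~650ms; the mount daemon needs a moment to come up, so the retry is expected behavior rather than a failure.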

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "483.475784ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.89666ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "354.691229ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.678024ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.12s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdspecific-port228958314/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.098141ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1017 19:35:29.984428  113592 retry.go:31] will retry after 718.096273ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdspecific-port228958314/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh "sudo umount -f /mount-9p": exit status 1 (284.005585ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-993605 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdspecific-port228958314/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)
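The same flow with a pinned host port; a sketch, assuming port 46464 is free on the host:

  # bind the 9p server to an explicit host port instead of an ephemeral one
  minikube -p functional-993605 mount /tmp/demo:/mount-9p --port 46464 &

Note that the final `umount -f` above exits with status 32 ("not mounted") because the mount process had already been stopped; the helper records the error, but the test still passes.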

TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2815053229/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2815053229/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2815053229/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T" /mount1: exit status 1 (368.212457ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1017 19:35:32.209274  113592 retry.go:31] will retry after 479.06459ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-993605 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-993605 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2815053229/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2815053229/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-993605 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2815053229/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)
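Cleanup does not have to address each mount individually; a sketch, assuming the three mounts above are still live:

  # kill every mount process attached to the profile in one shot
  minikube -p functional-993605 mount --kill=true

After the kill, the per-mount stop steps find no parent process, which is what the "unable to find parent, assuming dead" lines above report.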

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-993605
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-993605
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-993605
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (200.6s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:36:09.079695  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m19.860056659s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.60s)
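The profile under test is a multi-control-plane (HA) cluster; a sketch of an equivalent manual start, assuming the kvm2 driver and crio runtime used by this job:

  # bring up an HA cluster: --ha provisions three control-plane nodes
  minikube -p ha-152903 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
  # verify every node reports Running/Configured
  minikube -p ha-152903 status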

TestMultiControlPlane/serial/DeployApp (6.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 kubectl -- rollout status deployment/busybox: (4.625463282s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-2jxtz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-ltf5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-xwxp9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-2jxtz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-ltf5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-xwxp9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-2jxtz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-ltf5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-xwxp9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.90s)
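Each busybox replica is probed with the same three lookups of increasing specificity; a sketch of one probe, reusing a pod name from the run above:

  kubectl --context ha-152903 exec busybox-7b57f96db7-2jxtz -- nslookup kubernetes.default.svc.cluster.local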

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-2jxtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-2jxtz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-ltf5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-ltf5m -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-xwxp9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 kubectl -- exec busybox-7b57f96db7-xwxp9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)

TestMultiControlPlane/serial/AddWorkerNode (43.59s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node add --alsologtostderr -v 5
E1017 19:39:55.766318  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:55.772800  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:55.784348  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:55.805792  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:55.847270  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:55.928789  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:56.090377  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:56.412488  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 node add --alsologtostderr -v 5: (42.676914846s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
E1017 19:39:57.054093  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.59s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-152903 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1017 19:39:58.336169  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (13.57s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp testdata/cp-test.txt ha-152903:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3863750807/001/cp-test_ha-152903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903:/home/docker/cp-test.txt ha-152903-m02:/home/docker/cp-test_ha-152903_ha-152903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test.txt"
E1017 19:40:00.898538  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test_ha-152903_ha-152903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903:/home/docker/cp-test.txt ha-152903-m03:/home/docker/cp-test_ha-152903_ha-152903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test_ha-152903_ha-152903-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903:/home/docker/cp-test.txt ha-152903-m04:/home/docker/cp-test_ha-152903_ha-152903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test_ha-152903_ha-152903-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp testdata/cp-test.txt ha-152903-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3863750807/001/cp-test_ha-152903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m02:/home/docker/cp-test.txt ha-152903:/home/docker/cp-test_ha-152903-m02_ha-152903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test_ha-152903-m02_ha-152903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m02:/home/docker/cp-test.txt ha-152903-m03:/home/docker/cp-test_ha-152903-m02_ha-152903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test_ha-152903-m02_ha-152903-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m02:/home/docker/cp-test.txt ha-152903-m04:/home/docker/cp-test_ha-152903-m02_ha-152903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test_ha-152903-m02_ha-152903-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp testdata/cp-test.txt ha-152903-m03:/home/docker/cp-test.txt
E1017 19:40:06.020016  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3863750807/001/cp-test_ha-152903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m03:/home/docker/cp-test.txt ha-152903:/home/docker/cp-test_ha-152903-m03_ha-152903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test_ha-152903-m03_ha-152903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m03:/home/docker/cp-test.txt ha-152903-m02:/home/docker/cp-test_ha-152903-m03_ha-152903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test_ha-152903-m03_ha-152903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m03:/home/docker/cp-test.txt ha-152903-m04:/home/docker/cp-test_ha-152903-m03_ha-152903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test_ha-152903-m03_ha-152903-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp testdata/cp-test.txt ha-152903-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3863750807/001/cp-test_ha-152903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m04:/home/docker/cp-test.txt ha-152903:/home/docker/cp-test_ha-152903-m04_ha-152903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903 "sudo cat /home/docker/cp-test_ha-152903-m04_ha-152903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m04:/home/docker/cp-test.txt ha-152903-m02:/home/docker/cp-test_ha-152903-m04_ha-152903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test_ha-152903-m04_ha-152903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 cp ha-152903-m04:/home/docker/cp-test.txt ha-152903-m03:/home/docker/cp-test_ha-152903-m04_ha-152903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 ssh -n ha-152903-m03 "sudo cat /home/docker/cp-test_ha-152903-m04_ha-152903-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.57s)
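The matrix above exercises every (source, destination) node pair; a sketch of a single hop, using the node:path syntax from the run:

  # copy from the primary control plane to m02
  minikube -p ha-152903 cp ha-152903:/home/docker/cp-test.txt ha-152903-m02:/home/docker/cp-test.txt
  # read it back on the destination node to confirm the transfer
  minikube -p ha-152903 ssh -n ha-152903-m02 "sudo cat /home/docker/cp-test.txt"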

TestMultiControlPlane/serial/StopSecondaryNode (81.57s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node stop m02 --alsologtostderr -v 5
E1017 19:40:16.261623  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:40:36.743535  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:40:41.367543  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:41:17.706822  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 node stop m02 --alsologtostderr -v 5: (1m20.866461246s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5: exit status 7 (697.565463ms)

-- stdout --
	ha-152903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-152903-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-152903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-152903-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1017 19:41:33.222789  127595 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:41:33.222918  127595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:33.222927  127595 out.go:374] Setting ErrFile to fd 2...
	I1017 19:41:33.222931  127595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:41:33.223142  127595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 19:41:33.223336  127595 out.go:368] Setting JSON to false
	I1017 19:41:33.223365  127595 mustload.go:65] Loading cluster: ha-152903
	I1017 19:41:33.223495  127595 notify.go:220] Checking for updates...
	I1017 19:41:33.223743  127595 config.go:182] Loaded profile config "ha-152903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:41:33.223763  127595 status.go:174] checking status of ha-152903 ...
	I1017 19:41:33.224203  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.224247  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.246819  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
	I1017 19:41:33.247600  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.248314  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.248337  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.248786  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.249048  127595 main.go:141] libmachine: (ha-152903) Calling .GetState
	I1017 19:41:33.251250  127595 status.go:371] ha-152903 host status = "Running" (err=<nil>)
	I1017 19:41:33.251267  127595 host.go:66] Checking if "ha-152903" exists ...
	I1017 19:41:33.251576  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.251623  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.265147  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I1017 19:41:33.265603  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.266132  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.266159  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.266551  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.266772  127595 main.go:141] libmachine: (ha-152903) Calling .GetIP
	I1017 19:41:33.270363  127595 main.go:141] libmachine: (ha-152903) DBG | domain ha-152903 has defined MAC address 52:54:00:23:04:02 in network mk-ha-152903
	I1017 19:41:33.271058  127595 main.go:141] libmachine: (ha-152903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:04:02", ip: ""} in network mk-ha-152903: {Iface:virbr1 ExpiryTime:2025-10-17 20:36:00 +0000 UTC Type:0 Mac:52:54:00:23:04:02 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-152903 Clientid:01:52:54:00:23:04:02}
	I1017 19:41:33.271095  127595 main.go:141] libmachine: (ha-152903) DBG | domain ha-152903 has defined IP address 192.168.39.13 and MAC address 52:54:00:23:04:02 in network mk-ha-152903
	I1017 19:41:33.271281  127595 host.go:66] Checking if "ha-152903" exists ...
	I1017 19:41:33.271613  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.271664  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.285367  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46627
	I1017 19:41:33.285835  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.286373  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.286405  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.286796  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.287080  127595 main.go:141] libmachine: (ha-152903) Calling .DriverName
	I1017 19:41:33.287292  127595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:33.287325  127595 main.go:141] libmachine: (ha-152903) Calling .GetSSHHostname
	I1017 19:41:33.290735  127595 main.go:141] libmachine: (ha-152903) DBG | domain ha-152903 has defined MAC address 52:54:00:23:04:02 in network mk-ha-152903
	I1017 19:41:33.291293  127595 main.go:141] libmachine: (ha-152903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:04:02", ip: ""} in network mk-ha-152903: {Iface:virbr1 ExpiryTime:2025-10-17 20:36:00 +0000 UTC Type:0 Mac:52:54:00:23:04:02 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:ha-152903 Clientid:01:52:54:00:23:04:02}
	I1017 19:41:33.291314  127595 main.go:141] libmachine: (ha-152903) DBG | domain ha-152903 has defined IP address 192.168.39.13 and MAC address 52:54:00:23:04:02 in network mk-ha-152903
	I1017 19:41:33.291629  127595 main.go:141] libmachine: (ha-152903) Calling .GetSSHPort
	I1017 19:41:33.291876  127595 main.go:141] libmachine: (ha-152903) Calling .GetSSHKeyPath
	I1017 19:41:33.292042  127595 main.go:141] libmachine: (ha-152903) Calling .GetSSHUsername
	I1017 19:41:33.292206  127595 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/ha-152903/id_rsa Username:docker}
	I1017 19:41:33.377159  127595 ssh_runner.go:195] Run: systemctl --version
	I1017 19:41:33.384360  127595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:33.402073  127595 kubeconfig.go:125] found "ha-152903" server: "https://192.168.39.254:8443"
	I1017 19:41:33.402122  127595 api_server.go:166] Checking apiserver status ...
	I1017 19:41:33.402169  127595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:41:33.422832  127595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	W1017 19:41:33.435367  127595 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:41:33.435446  127595 ssh_runner.go:195] Run: ls
	I1017 19:41:33.442604  127595 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1017 19:41:33.447701  127595 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1017 19:41:33.447729  127595 status.go:463] ha-152903 apiserver status = Running (err=<nil>)
	I1017 19:41:33.447740  127595 status.go:176] ha-152903 status: &{Name:ha-152903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:41:33.447760  127595 status.go:174] checking status of ha-152903-m02 ...
	I1017 19:41:33.448181  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.448230  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.462538  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1017 19:41:33.463239  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.463758  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.463790  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.464167  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.464402  127595 main.go:141] libmachine: (ha-152903-m02) Calling .GetState
	I1017 19:41:33.466300  127595 status.go:371] ha-152903-m02 host status = "Stopped" (err=<nil>)
	I1017 19:41:33.466315  127595 status.go:384] host is not running, skipping remaining checks
	I1017 19:41:33.466321  127595 status.go:176] ha-152903-m02 status: &{Name:ha-152903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:41:33.466339  127595 status.go:174] checking status of ha-152903-m03 ...
	I1017 19:41:33.466610  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.466657  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.480606  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1017 19:41:33.481074  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.481508  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.481528  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.481916  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.482147  127595 main.go:141] libmachine: (ha-152903-m03) Calling .GetState
	I1017 19:41:33.484088  127595 status.go:371] ha-152903-m03 host status = "Running" (err=<nil>)
	I1017 19:41:33.484105  127595 host.go:66] Checking if "ha-152903-m03" exists ...
	I1017 19:41:33.484459  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.484528  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.500341  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I1017 19:41:33.500797  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.501258  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.501284  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.501647  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.501901  127595 main.go:141] libmachine: (ha-152903-m03) Calling .GetIP
	I1017 19:41:33.505921  127595 main.go:141] libmachine: (ha-152903-m03) DBG | domain ha-152903-m03 has defined MAC address 52:54:00:04:c3:be in network mk-ha-152903
	I1017 19:41:33.506464  127595 main.go:141] libmachine: (ha-152903-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c3:be", ip: ""} in network mk-ha-152903: {Iface:virbr1 ExpiryTime:2025-10-17 20:37:57 +0000 UTC Type:0 Mac:52:54:00:04:c3:be Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-152903-m03 Clientid:01:52:54:00:04:c3:be}
	I1017 19:41:33.506499  127595 main.go:141] libmachine: (ha-152903-m03) DBG | domain ha-152903-m03 has defined IP address 192.168.39.107 and MAC address 52:54:00:04:c3:be in network mk-ha-152903
	I1017 19:41:33.506940  127595 host.go:66] Checking if "ha-152903-m03" exists ...
	I1017 19:41:33.507414  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.507486  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.523289  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I1017 19:41:33.523837  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.524314  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.524343  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.524747  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.524992  127595 main.go:141] libmachine: (ha-152903-m03) Calling .DriverName
	I1017 19:41:33.525187  127595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:33.525211  127595 main.go:141] libmachine: (ha-152903-m03) Calling .GetSSHHostname
	I1017 19:41:33.528841  127595 main.go:141] libmachine: (ha-152903-m03) DBG | domain ha-152903-m03 has defined MAC address 52:54:00:04:c3:be in network mk-ha-152903
	I1017 19:41:33.529464  127595 main.go:141] libmachine: (ha-152903-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c3:be", ip: ""} in network mk-ha-152903: {Iface:virbr1 ExpiryTime:2025-10-17 20:37:57 +0000 UTC Type:0 Mac:52:54:00:04:c3:be Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-152903-m03 Clientid:01:52:54:00:04:c3:be}
	I1017 19:41:33.529490  127595 main.go:141] libmachine: (ha-152903-m03) DBG | domain ha-152903-m03 has defined IP address 192.168.39.107 and MAC address 52:54:00:04:c3:be in network mk-ha-152903
	I1017 19:41:33.529786  127595 main.go:141] libmachine: (ha-152903-m03) Calling .GetSSHPort
	I1017 19:41:33.530029  127595 main.go:141] libmachine: (ha-152903-m03) Calling .GetSSHKeyPath
	I1017 19:41:33.530311  127595 main.go:141] libmachine: (ha-152903-m03) Calling .GetSSHUsername
	I1017 19:41:33.530503  127595 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/ha-152903-m03/id_rsa Username:docker}
	I1017 19:41:33.629079  127595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:33.651526  127595 kubeconfig.go:125] found "ha-152903" server: "https://192.168.39.254:8443"
	I1017 19:41:33.651557  127595 api_server.go:166] Checking apiserver status ...
	I1017 19:41:33.651593  127595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:41:33.673809  127595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1788/cgroup
	W1017 19:41:33.689394  127595 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1788/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:41:33.689473  127595 ssh_runner.go:195] Run: ls
	I1017 19:41:33.695631  127595 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1017 19:41:33.701599  127595 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1017 19:41:33.701628  127595 status.go:463] ha-152903-m03 apiserver status = Running (err=<nil>)
	I1017 19:41:33.701642  127595 status.go:176] ha-152903-m03 status: &{Name:ha-152903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:41:33.701665  127595 status.go:174] checking status of ha-152903-m04 ...
	I1017 19:41:33.702017  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.702076  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.717418  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I1017 19:41:33.717906  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.718455  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.718484  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.718956  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.719181  127595 main.go:141] libmachine: (ha-152903-m04) Calling .GetState
	I1017 19:41:33.721349  127595 status.go:371] ha-152903-m04 host status = "Running" (err=<nil>)
	I1017 19:41:33.721370  127595 host.go:66] Checking if "ha-152903-m04" exists ...
	I1017 19:41:33.721881  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.721958  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.735803  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I1017 19:41:33.736336  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.736772  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.736798  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.737126  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.737335  127595 main.go:141] libmachine: (ha-152903-m04) Calling .GetIP
	I1017 19:41:33.740767  127595 main.go:141] libmachine: (ha-152903-m04) DBG | domain ha-152903-m04 has defined MAC address 52:54:00:7a:98:cb in network mk-ha-152903
	I1017 19:41:33.741282  127595 main.go:141] libmachine: (ha-152903-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:cb", ip: ""} in network mk-ha-152903: {Iface:virbr1 ExpiryTime:2025-10-17 20:39:30 +0000 UTC Type:0 Mac:52:54:00:7a:98:cb Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-152903-m04 Clientid:01:52:54:00:7a:98:cb}
	I1017 19:41:33.741308  127595 main.go:141] libmachine: (ha-152903-m04) DBG | domain ha-152903-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:7a:98:cb in network mk-ha-152903
	I1017 19:41:33.741579  127595 host.go:66] Checking if "ha-152903-m04" exists ...
	I1017 19:41:33.741894  127595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:41:33.741946  127595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:41:33.756955  127595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
	I1017 19:41:33.757462  127595 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:41:33.758052  127595 main.go:141] libmachine: Using API Version  1
	I1017 19:41:33.758080  127595 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:41:33.758436  127595 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:41:33.758641  127595 main.go:141] libmachine: (ha-152903-m04) Calling .DriverName
	I1017 19:41:33.758799  127595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:41:33.758819  127595 main.go:141] libmachine: (ha-152903-m04) Calling .GetSSHHostname
	I1017 19:41:33.762308  127595 main.go:141] libmachine: (ha-152903-m04) DBG | domain ha-152903-m04 has defined MAC address 52:54:00:7a:98:cb in network mk-ha-152903
	I1017 19:41:33.762803  127595 main.go:141] libmachine: (ha-152903-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:98:cb", ip: ""} in network mk-ha-152903: {Iface:virbr1 ExpiryTime:2025-10-17 20:39:30 +0000 UTC Type:0 Mac:52:54:00:7a:98:cb Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-152903-m04 Clientid:01:52:54:00:7a:98:cb}
	I1017 19:41:33.762826  127595 main.go:141] libmachine: (ha-152903-m04) DBG | domain ha-152903-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:7a:98:cb in network mk-ha-152903
	I1017 19:41:33.763173  127595 main.go:141] libmachine: (ha-152903-m04) Calling .GetSSHPort
	I1017 19:41:33.763387  127595 main.go:141] libmachine: (ha-152903-m04) Calling .GetSSHKeyPath
	I1017 19:41:33.763592  127595 main.go:141] libmachine: (ha-152903-m04) Calling .GetSSHUsername
	I1017 19:41:33.763867  127595 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/ha-152903-m04/id_rsa Username:docker}
	I1017 19:41:33.847485  127595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:41:33.867634  127595 status.go:176] ha-152903-m04 status: &{Name:ha-152903-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (81.57s)
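The exit status is the assertion here: with m02 stopped, `status` exits 7 rather than 0, while the remaining nodes still report Running. A sketch of checking it by hand:

  minikube -p ha-152903 node stop m02
  minikube -p ha-152903 status; echo "exit: $?"    # prints 7 while m02 is down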

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.98s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 node start m02 --alsologtostderr -v 5: (41.784926132s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5: (1.112396689s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.98s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.166211178s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.17s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (390.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 stop --alsologtostderr -v 5
E1017 19:42:39.630650  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:44:55.765188  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:23.473148  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:45:41.363777  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 stop --alsologtostderr -v 5: (4m20.722039299s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 start --wait true --alsologtostderr -v 5
E1017 19:47:04.441709  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 start --wait true --alsologtostderr -v 5: (2m9.470138225s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (390.31s)
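The invariant being checked is that a full stop/start cycle preserves the node set; a sketch of the same round-trip with a before/after comparison:

  minikube -p ha-152903 node list > /tmp/nodes-before
  minikube -p ha-152903 stop
  minikube -p ha-152903 start --wait true
  minikube -p ha-152903 node list > /tmp/nodes-after
  diff /tmp/nodes-before /tmp/nodes-after    # expected: no output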

TestMultiControlPlane/serial/DeleteSecondaryNode (18.64s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 node delete m03 --alsologtostderr -v 5: (17.811490582s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.64s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (268.12s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 stop --alsologtostderr -v 5
E1017 19:49:55.765130  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:50:41.364147  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 stop --alsologtostderr -v 5: (4m28.004705043s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5: exit status 7 (113.488312ms)

-- stdout --
	ha-152903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-152903-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-152903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1017 19:53:36.416251  131520 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:53:36.416530  131520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:53:36.416541  131520 out.go:374] Setting ErrFile to fd 2...
	I1017 19:53:36.416546  131520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:53:36.416751  131520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 19:53:36.416950  131520 out.go:368] Setting JSON to false
	I1017 19:53:36.416980  131520 mustload.go:65] Loading cluster: ha-152903
	I1017 19:53:36.417044  131520 notify.go:220] Checking for updates...
	I1017 19:53:36.417521  131520 config.go:182] Loaded profile config "ha-152903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:53:36.417545  131520 status.go:174] checking status of ha-152903 ...
	I1017 19:53:36.418092  131520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:36.418151  131520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:36.440229  131520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I1017 19:53:36.440770  131520 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:36.441466  131520 main.go:141] libmachine: Using API Version  1
	I1017 19:53:36.441493  131520 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:36.441901  131520 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:36.442147  131520 main.go:141] libmachine: (ha-152903) Calling .GetState
	I1017 19:53:36.444144  131520 status.go:371] ha-152903 host status = "Stopped" (err=<nil>)
	I1017 19:53:36.444160  131520 status.go:384] host is not running, skipping remaining checks
	I1017 19:53:36.444166  131520 status.go:176] ha-152903 status: &{Name:ha-152903 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:53:36.444184  131520 status.go:174] checking status of ha-152903-m02 ...
	I1017 19:53:36.444471  131520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:36.444516  131520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:36.457956  131520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I1017 19:53:36.458649  131520 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:36.459309  131520 main.go:141] libmachine: Using API Version  1
	I1017 19:53:36.459355  131520 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:36.459772  131520 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:36.460010  131520 main.go:141] libmachine: (ha-152903-m02) Calling .GetState
	I1017 19:53:36.461782  131520 status.go:371] ha-152903-m02 host status = "Stopped" (err=<nil>)
	I1017 19:53:36.461802  131520 status.go:384] host is not running, skipping remaining checks
	I1017 19:53:36.461809  131520 status.go:176] ha-152903-m02 status: &{Name:ha-152903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:53:36.461831  131520 status.go:174] checking status of ha-152903-m04 ...
	I1017 19:53:36.462152  131520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 19:53:36.462197  131520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:53:36.475612  131520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1017 19:53:36.476059  131520 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:53:36.476490  131520 main.go:141] libmachine: Using API Version  1
	I1017 19:53:36.476508  131520 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:53:36.476931  131520 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:53:36.477104  131520 main.go:141] libmachine: (ha-152903-m04) Calling .GetState
	I1017 19:53:36.478899  131520 status.go:371] ha-152903-m04 host status = "Stopped" (err=<nil>)
	I1017 19:53:36.478916  131520 status.go:384] host is not running, skipping remaining checks
	I1017 19:53:36.478934  131520 status.go:176] ha-152903-m04 status: &{Name:ha-152903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (268.12s)
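
Exit status 7 from "status" is the expected result of a full stop: recent minikube releases compute the status exit code as a bitfield (1 if the host is down, 2 if kubelet is down, 4 if the apiserver is down), so 7 is consistent with all three stopped. The &{...} dump at status.go:176 above can be read as roughly the following shape; this struct is reconstructed from the dump, not minikube's canonical type:

	package main

	import "fmt"

	// Status mirrors the fields visible in the &{...} dump above.
	type Status struct {
		Name       string
		Host       string // "Running" or "Stopped"
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := Status{Name: "ha-152903", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		fmt.Printf("%+v\n", s)
	}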

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (88.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 19:54:55.765949  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.429281119s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (88.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (83.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 node add --control-plane --alsologtostderr -v 5
E1017 19:55:41.368055  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:56:18.835441  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-152903 node add --control-plane --alsologtostderr -v 5: (1m22.713940575s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-152903 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (55.92s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-353890 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-353890 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.919414874s)
--- PASS: TestJSONOutput/start/Command (55.92s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
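
DistinctCurrentSteps and IncreasingCurrentSteps assert properties of the data.currentstep field carried by the step events shown elsewhere in this report. A standalone sketch of the same check, assuming that event shape (pipe the --output=json stream into it; strictly increasing step numbers imply distinctness, so one pass covers both assertions):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	type ev struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		last := -1
		for sc.Scan() {
			var e ev
			// Skip non-JSON lines and non-step events.
			if json.Unmarshal(sc.Bytes(), &e) != nil || e.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			n, err := strconv.Atoi(e.Data["currentstep"])
			if err != nil {
				continue
			}
			if n <= last {
				fmt.Println("currentstep not strictly increasing at", n)
				os.Exit(1)
			}
			last = n
		}
		fmt.Println("currentstep values are distinct and increasing")
	}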

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-353890 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-353890 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-353890 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-353890 --output=json --user=testUser: (6.810010784s)
--- PASS: TestJSONOutput/stop/Command (6.81s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-840671 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-840671 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.002548ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b678f8db-a26a-45be-b9fb-50b27643f892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-840671] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8420ed9a-d1ff-4bb6-b879-0428261ac634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"5f9df6a6-3d69-4128-b9f7-6a0ae6139a5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"828e8a2a-bc4e-445c-812c-fb952c7d18bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig"}}
	{"specversion":"1.0","id":"619a58d3-fef8-4c6d-8911-220ade9bb982","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube"}}
	{"specversion":"1.0","id":"ae95944b-cbfb-4721-9d10-65e98a34652f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f80ac79b-9ae6-4b48-91dd-7cf4475bc4e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"45ae6cea-5a00-4856-8a8a-825dd3d2b24e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-840671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-840671
--- PASS: TestErrorJSONOutput (0.22s)
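
Each line printed under --output=json is a CloudEvents-style envelope, as the stdout above shows. A sketch of consuming one such line (the struct is illustrative, keyed only to the fields visible in these events):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// The error event from the stdout block above, abbreviated.
		line := `{"specversion":"1.0","id":"45ae6cea-5a00-4856-8a8a-825dd3d2b24e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["name"], "exitcode", e.Data["exitcode"])
	}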

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (79.92s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-889797 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-889797 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.836950591s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-893277 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-893277 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.212580144s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-889797
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-893277
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-893277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-893277
helpers_test.go:175: Cleaning up "first-889797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-889797
--- PASS: TestMinikubeProfile (79.92s)
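
The "profile list -ojson" calls above return a JSON document whose top-level keys split profiles into valid and invalid sets. A sketch of consuming it, treating the exact shape ("valid"/"invalid" arrays of objects with a Name field) as an assumption about recent minikube releases:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var p profileList
		if err := json.Unmarshal(out, &p); err != nil {
			panic(err)
		}
		for _, v := range p.Valid {
			fmt.Println("valid profile:", v.Name)
		}
	}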

                                                
                                    
TestMountStart/serial/StartWithMountFirst (23.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-106765 --memory=3072 --mount-string /tmp/TestMountStartserial2426308443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-106765 --memory=3072 --mount-string /tmp/TestMountStartserial2426308443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.864546323s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-106765 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-106765 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
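
The findmnt verification above relies on findmnt's JSON output: when the target is mounted it prints a "filesystems" array (target, source, fstype, options) and exits zero, otherwise it exits nonzero. A local sketch of the same check (the real test runs it over "minikube ssh"):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type findmntOut struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			FSType  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func mounted(target string) (bool, error) {
		out, err := exec.Command("findmnt", "--json", target).Output()
		if err != nil {
			return false, err // findmnt exits nonzero when the target is not mounted
		}
		var fm findmntOut
		if err := json.Unmarshal(out, &fm); err != nil {
			return false, err
		}
		return len(fm.Filesystems) > 0, nil
	}

	func main() {
		fmt.Println(mounted("/minikube-host"))
	}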

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-127357 --memory=3072 --mount-string /tmp/TestMountStartserial2426308443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-127357 --memory=3072 --mount-string /tmp/TestMountStartserial2426308443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.947139532s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.95s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-127357 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-127357 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-106765 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-127357 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-127357 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-127357
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-127357: (1.260820604s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.78s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-127357
E1017 19:59:55.768379  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-127357: (18.777082256s)
--- PASS: TestMountStart/serial/RestartStopped (19.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-127357 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-127357 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (100.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-395053 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:00:41.368962  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-395053 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.561167622s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.01s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-395053 -- rollout status deployment/busybox: (4.121070482s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-gj8wg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-t2bbz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-gj8wg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-t2bbz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-gj8wg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-t2bbz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.69s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-gj8wg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-gj8wg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-t2bbz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-395053 -- exec busybox-7b57f96db7-t2bbz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
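
The shell pipeline above resolves host.minikube.internal inside each pod and extracts the gateway IP to ping: awk 'NR==5' keeps line 5 of the busybox nslookup output, and cut -d' ' -f3 keeps the third space-separated field. A sketch of that extraction (the sample output is hypothetical; the NR==5 assumption depends on busybox nslookup's formatting):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Hypothetical busybox nslookup output; line 5 carries the resolved address.
		out := `Server:    10.96.0.10
	Address:   10.96.0.10:53

	Name:      host.minikube.internal
	Address 1: 192.168.39.1 host.minikube.internal
	`
		lines := strings.Split(out, "\n")
		fields := strings.Fields(lines[4]) // awk 'NR==5'
		fmt.Println(fields[2])             // cut -d' ' -f3 -> 192.168.39.1
	}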

                                                
                                    
TestMultiNode/serial/AddNode (46.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-395053 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-395053 -v=5 --alsologtostderr: (45.67765236s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-395053 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp testdata/cp-test.txt multinode-395053:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2050588168/001/cp-test_multinode-395053.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053:/home/docker/cp-test.txt multinode-395053-m02:/home/docker/cp-test_multinode-395053_multinode-395053-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m02 "sudo cat /home/docker/cp-test_multinode-395053_multinode-395053-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053:/home/docker/cp-test.txt multinode-395053-m03:/home/docker/cp-test_multinode-395053_multinode-395053-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m03 "sudo cat /home/docker/cp-test_multinode-395053_multinode-395053-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp testdata/cp-test.txt multinode-395053-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2050588168/001/cp-test_multinode-395053-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053-m02:/home/docker/cp-test.txt multinode-395053:/home/docker/cp-test_multinode-395053-m02_multinode-395053.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053 "sudo cat /home/docker/cp-test_multinode-395053-m02_multinode-395053.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053-m02:/home/docker/cp-test.txt multinode-395053-m03:/home/docker/cp-test_multinode-395053-m02_multinode-395053-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m03 "sudo cat /home/docker/cp-test_multinode-395053-m02_multinode-395053-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp testdata/cp-test.txt multinode-395053-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2050588168/001/cp-test_multinode-395053-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053-m03:/home/docker/cp-test.txt multinode-395053:/home/docker/cp-test_multinode-395053-m03_multinode-395053.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053 "sudo cat /home/docker/cp-test_multinode-395053-m03_multinode-395053.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 cp multinode-395053-m03:/home/docker/cp-test.txt multinode-395053-m02:/home/docker/cp-test_multinode-395053-m03_multinode-395053-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 ssh -n multinode-395053-m02 "sudo cat /home/docker/cp-test_multinode-395053-m03_multinode-395053-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.47s)
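
The CopyFile sequence above is a copy/read-back/compare loop across every node pair. A condensed sketch of one round trip, using the binary path, profile, and paths shown in the log (error handling abbreviated):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const mk = "out/minikube-linux-amd64"
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		// minikube cp local -> node, then read the file back over ssh.
		if err := exec.Command(mk, "-p", "multinode-395053", "cp", "testdata/cp-test.txt",
			"multinode-395053:/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		got, err := exec.Command(mk, "-p", "multinode-395053", "ssh", "-n", "multinode-395053",
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("round trip matches:", bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)))
	}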

                                                
                                    
TestMultiNode/serial/StopNode (2.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-395053 node stop m03: (1.549153293s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-395053 status: exit status 7 (471.764901ms)

                                                
                                                
-- stdout --
	multinode-395053
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-395053-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-395053-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr: exit status 7 (456.704676ms)

                                                
                                                
-- stdout --
	multinode-395053
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-395053-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-395053-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:02:50.258054  139147 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:02:50.258173  139147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:02:50.258185  139147 out.go:374] Setting ErrFile to fd 2...
	I1017 20:02:50.258191  139147 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:02:50.258434  139147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 20:02:50.258627  139147 out.go:368] Setting JSON to false
	I1017 20:02:50.258660  139147 mustload.go:65] Loading cluster: multinode-395053
	I1017 20:02:50.258783  139147 notify.go:220] Checking for updates...
	I1017 20:02:50.259172  139147 config.go:182] Loaded profile config "multinode-395053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:02:50.259191  139147 status.go:174] checking status of multinode-395053 ...
	I1017 20:02:50.259825  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.259889  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.274171  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I1017 20:02:50.274664  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.275314  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.275351  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.275725  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.275965  139147 main.go:141] libmachine: (multinode-395053) Calling .GetState
	I1017 20:02:50.277839  139147 status.go:371] multinode-395053 host status = "Running" (err=<nil>)
	I1017 20:02:50.277885  139147 host.go:66] Checking if "multinode-395053" exists ...
	I1017 20:02:50.278214  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.278256  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.292445  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1017 20:02:50.293049  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.293636  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.293668  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.294059  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.294262  139147 main.go:141] libmachine: (multinode-395053) Calling .GetIP
	I1017 20:02:50.297535  139147 main.go:141] libmachine: (multinode-395053) DBG | domain multinode-395053 has defined MAC address 52:54:00:25:d0:6d in network mk-multinode-395053
	I1017 20:02:50.298063  139147 main.go:141] libmachine: (multinode-395053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:d0:6d", ip: ""} in network mk-multinode-395053: {Iface:virbr1 ExpiryTime:2025-10-17 21:00:22 +0000 UTC Type:0 Mac:52:54:00:25:d0:6d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-395053 Clientid:01:52:54:00:25:d0:6d}
	I1017 20:02:50.298093  139147 main.go:141] libmachine: (multinode-395053) DBG | domain multinode-395053 has defined IP address 192.168.39.234 and MAC address 52:54:00:25:d0:6d in network mk-multinode-395053
	I1017 20:02:50.298263  139147 host.go:66] Checking if "multinode-395053" exists ...
	I1017 20:02:50.298587  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.298629  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.312980  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42875
	I1017 20:02:50.313500  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.314036  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.314059  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.314372  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.314555  139147 main.go:141] libmachine: (multinode-395053) Calling .DriverName
	I1017 20:02:50.314754  139147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:02:50.314792  139147 main.go:141] libmachine: (multinode-395053) Calling .GetSSHHostname
	I1017 20:02:50.318276  139147 main.go:141] libmachine: (multinode-395053) DBG | domain multinode-395053 has defined MAC address 52:54:00:25:d0:6d in network mk-multinode-395053
	I1017 20:02:50.318782  139147 main.go:141] libmachine: (multinode-395053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:d0:6d", ip: ""} in network mk-multinode-395053: {Iface:virbr1 ExpiryTime:2025-10-17 21:00:22 +0000 UTC Type:0 Mac:52:54:00:25:d0:6d Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:multinode-395053 Clientid:01:52:54:00:25:d0:6d}
	I1017 20:02:50.318821  139147 main.go:141] libmachine: (multinode-395053) DBG | domain multinode-395053 has defined IP address 192.168.39.234 and MAC address 52:54:00:25:d0:6d in network mk-multinode-395053
	I1017 20:02:50.319008  139147 main.go:141] libmachine: (multinode-395053) Calling .GetSSHPort
	I1017 20:02:50.319170  139147 main.go:141] libmachine: (multinode-395053) Calling .GetSSHKeyPath
	I1017 20:02:50.319314  139147 main.go:141] libmachine: (multinode-395053) Calling .GetSSHUsername
	I1017 20:02:50.319445  139147 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/multinode-395053/id_rsa Username:docker}
	I1017 20:02:50.408477  139147 ssh_runner.go:195] Run: systemctl --version
	I1017 20:02:50.415665  139147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:02:50.432562  139147 kubeconfig.go:125] found "multinode-395053" server: "https://192.168.39.234:8443"
	I1017 20:02:50.432597  139147 api_server.go:166] Checking apiserver status ...
	I1017 20:02:50.432629  139147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:02:50.451589  139147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	W1017 20:02:50.463609  139147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:02:50.463669  139147 ssh_runner.go:195] Run: ls
	I1017 20:02:50.468621  139147 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I1017 20:02:50.478113  139147 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I1017 20:02:50.478146  139147 status.go:463] multinode-395053 apiserver status = Running (err=<nil>)
	I1017 20:02:50.478172  139147 status.go:176] multinode-395053 status: &{Name:multinode-395053 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:02:50.478189  139147 status.go:174] checking status of multinode-395053-m02 ...
	I1017 20:02:50.478476  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.478520  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.493596  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I1017 20:02:50.494105  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.494549  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.494570  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.495029  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.495237  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .GetState
	I1017 20:02:50.497146  139147 status.go:371] multinode-395053-m02 host status = "Running" (err=<nil>)
	I1017 20:02:50.497164  139147 host.go:66] Checking if "multinode-395053-m02" exists ...
	I1017 20:02:50.497437  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.497493  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.511359  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I1017 20:02:50.512041  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.512513  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.512533  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.512895  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.513095  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .GetIP
	I1017 20:02:50.516503  139147 main.go:141] libmachine: (multinode-395053-m02) DBG | domain multinode-395053-m02 has defined MAC address 52:54:00:16:6b:ce in network mk-multinode-395053
	I1017 20:02:50.517018  139147 main.go:141] libmachine: (multinode-395053-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6b:ce", ip: ""} in network mk-multinode-395053: {Iface:virbr1 ExpiryTime:2025-10-17 21:01:17 +0000 UTC Type:0 Mac:52:54:00:16:6b:ce Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-395053-m02 Clientid:01:52:54:00:16:6b:ce}
	I1017 20:02:50.517053  139147 main.go:141] libmachine: (multinode-395053-m02) DBG | domain multinode-395053-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:16:6b:ce in network mk-multinode-395053
	I1017 20:02:50.517209  139147 host.go:66] Checking if "multinode-395053-m02" exists ...
	I1017 20:02:50.517574  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.517628  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.533277  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1017 20:02:50.533718  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.534217  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.534251  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.534593  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.534830  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .DriverName
	I1017 20:02:50.535043  139147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:02:50.535067  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .GetSSHHostname
	I1017 20:02:50.537952  139147 main.go:141] libmachine: (multinode-395053-m02) DBG | domain multinode-395053-m02 has defined MAC address 52:54:00:16:6b:ce in network mk-multinode-395053
	I1017 20:02:50.538381  139147 main.go:141] libmachine: (multinode-395053-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:6b:ce", ip: ""} in network mk-multinode-395053: {Iface:virbr1 ExpiryTime:2025-10-17 21:01:17 +0000 UTC Type:0 Mac:52:54:00:16:6b:ce Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:multinode-395053-m02 Clientid:01:52:54:00:16:6b:ce}
	I1017 20:02:50.538415  139147 main.go:141] libmachine: (multinode-395053-m02) DBG | domain multinode-395053-m02 has defined IP address 192.168.39.101 and MAC address 52:54:00:16:6b:ce in network mk-multinode-395053
	I1017 20:02:50.538553  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .GetSSHPort
	I1017 20:02:50.538731  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .GetSSHKeyPath
	I1017 20:02:50.538923  139147 main.go:141] libmachine: (multinode-395053-m02) Calling .GetSSHUsername
	I1017 20:02:50.539102  139147 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-109682/.minikube/machines/multinode-395053-m02/id_rsa Username:docker}
	I1017 20:02:50.623711  139147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:02:50.642977  139147 status.go:176] multinode-395053-m02 status: &{Name:multinode-395053-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:02:50.643033  139147 status.go:174] checking status of multinode-395053-m03 ...
	I1017 20:02:50.643380  139147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:02:50.643434  139147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:02:50.659449  139147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I1017 20:02:50.659919  139147 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:02:50.660381  139147 main.go:141] libmachine: Using API Version  1
	I1017 20:02:50.660424  139147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:02:50.660787  139147 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:02:50.661039  139147 main.go:141] libmachine: (multinode-395053-m03) Calling .GetState
	I1017 20:02:50.662794  139147 status.go:371] multinode-395053-m03 host status = "Stopped" (err=<nil>)
	I1017 20:02:50.662811  139147 status.go:384] host is not running, skipping remaining checks
	I1017 20:02:50.662819  139147 status.go:176] multinode-395053-m03 status: &{Name:multinode-395053-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.48s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-395053 node start m03 -v=5 --alsologtostderr: (37.940312108s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.62s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (298.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-395053
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-395053
E1017 20:03:44.445457  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:55.768596  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:41.368376  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-395053: (2m52.039278413s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-395053 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-395053 --wait=true -v=5 --alsologtostderr: (2m6.763394068s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-395053
--- PASS: TestMultiNode/serial/RestartKeepsNodes (298.91s)
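What this test pins down: a full `minikube stop` on a multi-node profile followed by `start --wait=true` restores every node recorded in the profile, so `node list` reports the same set before and after the restart. Stripped of the test harness, the sequence from this run is:

    out/minikube-linux-amd64 node list -p multinode-395053    # record the node set
    out/minikube-linux-amd64 stop -p multinode-395053         # stops all nodes
    out/minikube-linux-amd64 start -p multinode-395053 --wait=true
    out/minikube-linux-amd64 node list -p multinode-395053    # same set as before the stop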

TestMultiNode/serial/DeleteNode (2.87s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-395053 node delete m03: (2.311956074s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.87s)
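The go-template above walks every node's conditions and prints the status of the Ready condition. For reference only (this is an alternative, not what the test runs), the same probe can be written with jsonpath, which avoids the nested range:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'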

TestMultiNode/serial/StopMultiNode (174.58s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 stop
E1017 20:09:55.769073  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:10:41.368550  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-395053 stop: (2m54.40160723s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-395053 status: exit status 7 (96.044098ms)
-- stdout --
	multinode-395053
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-395053-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr: exit status 7 (86.450261ms)
-- stdout --
	multinode-395053
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-395053-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1017 20:11:25.611908  141875 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:11:25.612037  141875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:25.612047  141875 out.go:374] Setting ErrFile to fd 2...
	I1017 20:11:25.612051  141875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:11:25.612263  141875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 20:11:25.612423  141875 out.go:368] Setting JSON to false
	I1017 20:11:25.612451  141875 mustload.go:65] Loading cluster: multinode-395053
	I1017 20:11:25.612529  141875 notify.go:220] Checking for updates...
	I1017 20:11:25.612794  141875 config.go:182] Loaded profile config "multinode-395053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:11:25.612807  141875 status.go:174] checking status of multinode-395053 ...
	I1017 20:11:25.613275  141875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:11:25.613318  141875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:11:25.627725  141875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I1017 20:11:25.628318  141875 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:11:25.629031  141875 main.go:141] libmachine: Using API Version  1
	I1017 20:11:25.629058  141875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:11:25.629464  141875 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:11:25.629679  141875 main.go:141] libmachine: (multinode-395053) Calling .GetState
	I1017 20:11:25.631774  141875 status.go:371] multinode-395053 host status = "Stopped" (err=<nil>)
	I1017 20:11:25.631789  141875 status.go:384] host is not running, skipping remaining checks
	I1017 20:11:25.631794  141875 status.go:176] multinode-395053 status: &{Name:multinode-395053 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:11:25.631813  141875 status.go:174] checking status of multinode-395053-m02 ...
	I1017 20:11:25.632292  141875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1017 20:11:25.632334  141875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 20:11:25.645796  141875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39311
	I1017 20:11:25.646301  141875 main.go:141] libmachine: () Calling .GetVersion
	I1017 20:11:25.646727  141875 main.go:141] libmachine: Using API Version  1
	I1017 20:11:25.646753  141875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 20:11:25.647079  141875 main.go:141] libmachine: () Calling .GetMachineName
	I1017 20:11:25.647238  141875 main.go:141] libmachine: (multinode-395053-m02) Calling .GetState
	I1017 20:11:25.649318  141875 status.go:371] multinode-395053-m02 host status = "Stopped" (err=<nil>)
	I1017 20:11:25.649341  141875 status.go:384] host is not running, skipping remaining checks
	I1017 20:11:25.649349  141875 status.go:176] multinode-395053-m02 status: &{Name:multinode-395053-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (174.58s)
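The two non-zero exits above are the assertion, not a failure: after `minikube stop`, `status` reports every host as Stopped and exits with code 7. In this run exit code 7 corresponds to a fully stopped profile, so a script can branch on it (treating other non-zero codes as genuine errors):

    out/minikube-linux-amd64 -p multinode-395053 status
    case $? in
        0) echo "profile running" ;;
        7) echo "profile stopped" ;;
        *) echo "status check failed" ;;
    esac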

TestMultiNode/serial/RestartMultiNode (95.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-395053 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:12:58.837335  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-395053 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.504940207s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-395053 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.08s)

TestMultiNode/serial/ValidateNameConflict (40.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-395053
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-395053-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-395053-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (65.99307ms)
-- stdout --
	* [multinode-395053-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-395053-m02' is duplicated with machine name 'multinode-395053-m02' in profile 'multinode-395053'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-395053-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-395053-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.86454668s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-395053
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-395053: exit status 80 (224.022855ms)
-- stdout --
	* Adding node m03 to cluster multinode-395053 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-395053-m03 already exists in multinode-395053-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-395053-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.06s)
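Taken together, the two rejections pin down the naming rule: a new profile may not reuse a machine name that an existing multi-node profile already owns (here multinode-395053-m02), and `node add` refuses when the next node name (m03) collides with a standalone profile. Enumerating existing profiles first avoids both, e.g.:

    out/minikube-linux-amd64 profile list --output=json    # list every existing profile before picking a name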

TestScheduledStopUnix (111.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-497321 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-497321 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.028111205s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-497321 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-497321 -n scheduled-stop-497321
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-497321 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 20:16:34.872270  113592 retry.go:31] will retry after 133.068µs: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.873472  113592 retry.go:31] will retry after 75.516µs: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.874636  113592 retry.go:31] will retry after 183.448µs: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.875808  113592 retry.go:31] will retry after 453.226µs: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.876967  113592 retry.go:31] will retry after 463.986µs: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.878108  113592 retry.go:31] will retry after 1.015519ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.879222  113592 retry.go:31] will retry after 1.149482ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.881430  113592 retry.go:31] will retry after 983.382µs: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.882549  113592 retry.go:31] will retry after 3.484371ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.886801  113592 retry.go:31] will retry after 3.911677ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.891007  113592 retry.go:31] will retry after 6.972226ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.898238  113592 retry.go:31] will retry after 8.642338ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.907521  113592 retry.go:31] will retry after 12.161428ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.920814  113592 retry.go:31] will retry after 19.961231ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
I1017 20:16:34.941100  113592 retry.go:31] will retry after 32.822968ms: open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid: no such file or directory
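The retry.go lines above show minikube polling for the scheduled-stop pid file with roughly exponential backoff plus jitter, from the microsecond range up to tens of milliseconds. A minimal shell sketch of the same pattern (deterministic doubling, no jitter; the pid path is the one from this run):

    pidfile="/home/jenkins/minikube-integration/21664-109682/.minikube/profiles/scheduled-stop-497321/pid"
    delay_ms=1
    while [ ! -e "$pidfile" ] && [ "$delay_ms" -le 1000 ]; do
        sleep "$(awk "BEGIN{print $delay_ms/1000}")"    # fractional sleep via awk
        delay_ms=$((delay_ms * 2))                      # double the delay each attempt
    done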
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-497321 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-497321 -n scheduled-stop-497321
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-497321
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-497321 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-497321
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-497321: exit status 7 (78.194478ms)
-- stdout --
	scheduled-stop-497321
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-497321 -n scheduled-stop-497321
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-497321 -n scheduled-stop-497321: exit status 7 (74.279702ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-497321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-497321
--- PASS: TestScheduledStopUnix (111.80s)
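The flags exercised above cover the whole scheduled-stop surface: arm a stop, inspect the countdown, cancel, re-arm with a short delay, and verify the VM actually went down (status exiting 7). Recapping the sequence from this run:

    minikube stop -p scheduled-stop-497321 --schedule 5m                   # arm a stop 5 minutes out
    minikube status -p scheduled-stop-497321 --format='{{.TimeToStop}}'    # inspect the countdown
    minikube stop -p scheduled-stop-497321 --cancel-scheduled              # disarm it
    minikube stop -p scheduled-stop-497321 --schedule 15s                  # re-arm; the stop fires 15s later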

TestRunningBinaryUpgrade (102.99s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2892543988 start -p running-upgrade-184033 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:20:41.364149  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2892543988 start -p running-upgrade-184033 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.969073123s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-184033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-184033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.096884359s)
helpers_test.go:175: Cleaning up "running-upgrade-184033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-184033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-184033: (1.292181847s)
--- PASS: TestRunningBinaryUpgrade (102.99s)

TestKubernetesUpgrade (160.3s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.577187728s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-402331
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-402331: (1.864969103s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-402331 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-402331 status --format={{.Host}}: exit status 7 (79.710593ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.560785195s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-402331 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (107.370576ms)
-- stdout --
	* [kubernetes-upgrade-402331] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-402331
	    minikube start -p kubernetes-upgrade-402331 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4023312 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-402331 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (33.705122827s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-402331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-402331
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-402331: (1.340682085s)
--- PASS: TestKubernetesUpgrade (160.30s)
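The supported direction is upgrade-only: start on an old Kubernetes, stop, start again with a newer --kubernetes-version and the cluster is migrated in place; asking for an older version afterwards is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) and the delete/recreate suggestions shown above. The shape of the flow from this run:

    minikube start -p kubernetes-upgrade-402331 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-402331
    minikube start -p kubernetes-upgrade-402331 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio    # in-place upgrade
    minikube start -p kubernetes-upgrade-402331 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio    # refused, exit 106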

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-273731 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-273731 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (90.657039ms)
-- stdout --
	* [NoKubernetes-273731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
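As the MK_USAGE message says, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config counts too. The valid invocations, both taken from this report:

    minikube config unset kubernetes-version    # clear a global default if one is set
    minikube start -p NoKubernetes-273731 --no-kubernetes --driver=kvm2 --container-runtime=crio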

TestNoKubernetes/serial/StartWithK8s (103.27s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-273731 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-273731 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.943775748s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-273731 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.27s)

TestNoKubernetes/serial/StartWithStopK8s (7.38s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-273731 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-273731 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6.223755017s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-273731 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-273731 status -o json: exit status 2 (269.313254ms)
-- stdout --
	{"Name":"NoKubernetes-273731","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-273731
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.38s)

TestNetworkPlugins/group/false (3.84s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-269519 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-269519 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (118.466211ms)
-- stdout --
	* [false-269519] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1017 20:19:30.235316  147894 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:19:30.235650  147894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:19:30.235663  147894 out.go:374] Setting ErrFile to fd 2...
	I1017 20:19:30.235670  147894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:19:30.236014  147894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-109682/.minikube/bin
	I1017 20:19:30.236667  147894 out.go:368] Setting JSON to false
	I1017 20:19:30.238058  147894 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7311,"bootTime":1760725059,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 20:19:30.238171  147894 start.go:141] virtualization: kvm guest
	I1017 20:19:30.241395  147894 out.go:179] * [false-269519] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 20:19:30.242989  147894 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:19:30.242991  147894 notify.go:220] Checking for updates...
	I1017 20:19:30.245424  147894 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:19:30.246699  147894 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-109682/kubeconfig
	I1017 20:19:30.248044  147894 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-109682/.minikube
	I1017 20:19:30.249410  147894 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 20:19:30.250709  147894 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:19:30.252762  147894 config.go:182] Loaded profile config "NoKubernetes-273731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:19:30.252908  147894 config.go:182] Loaded profile config "cert-expiration-292976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:19:30.253051  147894 config.go:182] Loaded profile config "cert-options-092834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:19:30.253192  147894 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:19:30.287685  147894 out.go:179] * Using the kvm2 driver based on user configuration
	I1017 20:19:30.289058  147894 start.go:305] selected driver: kvm2
	I1017 20:19:30.289077  147894 start.go:925] validating driver "kvm2" against <nil>
	I1017 20:19:30.289092  147894 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:19:30.290986  147894 out.go:203] 
	W1017 20:19:30.292185  147894 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1017 20:19:30.293291  147894 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-269519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-269519

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-269519

>>> host: /etc/nsswitch.conf:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/hosts:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/resolv.conf:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-269519

>>> host: crictl pods:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: crictl containers:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> k8s: describe netcat deployment:
error: context "false-269519" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-269519" does not exist

>>> k8s: netcat logs:
error: context "false-269519" does not exist

>>> k8s: describe coredns deployment:
error: context "false-269519" does not exist

>>> k8s: describe coredns pods:
error: context "false-269519" does not exist

>>> k8s: coredns logs:
error: context "false-269519" does not exist

>>> k8s: describe api server pod(s):
error: context "false-269519" does not exist

>>> k8s: api server logs:
error: context "false-269519" does not exist

>>> host: /etc/cni:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: ip a s:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: ip r s:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: iptables-save:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: iptables table nat:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> k8s: describe kube-proxy daemon set:
error: context "false-269519" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-269519" does not exist

>>> k8s: kube-proxy logs:
error: context "false-269519" does not exist

>>> host: kubelet daemon status:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: kubelet daemon config:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> k8s: kubelet logs:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:19:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.19:8443
  name: NoKubernetes-273731
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:18:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.34:8443
  name: cert-expiration-292976
contexts:
- context:
    cluster: NoKubernetes-273731
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:19:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-273731
  name: NoKubernetes-273731
- context:
    cluster: cert-expiration-292976
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:18:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-292976
  name: cert-expiration-292976
current-context: NoKubernetes-273731
kind: Config
users:
- name: NoKubernetes-273731
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/NoKubernetes-273731/client.crt
    client-key: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/NoKubernetes-273731/client.key
- name: cert-expiration-292976
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/cert-expiration-292976/client.crt
    client-key: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/cert-expiration-292976/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-269519

>>> host: docker daemon status:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: docker daemon config:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/docker/daemon.json:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: docker system info:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: cri-docker daemon status:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: cri-docker daemon config:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: cri-dockerd version:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: containerd daemon status:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: containerd daemon config:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/containerd/config.toml:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: containerd config dump:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: crio daemon status:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: crio daemon config:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: /etc/crio:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

>>> host: crio config:
* Profile "false-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269519"

----------------------- debugLogs end: false-269519 [took: 3.540411295s] --------------------------------
helpers_test.go:175: Cleaning up "false-269519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-269519
--- PASS: TestNetworkPlugins/group/false (3.84s)
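The quick pass is by design: with --container-runtime=crio a CNI is mandatory, so --cni=false is rejected (exit 14) before any VM is created, and the debugLogs above merely confirm no profile was left behind. Any concrete CNI value avoids the error; for example (the bridge choice is an illustration, not something this run exercised):

    minikube start -p false-269519 --container-runtime=crio --cni=bridge --driver=kvm2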

TestNoKubernetes/serial/Start (21.83s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-273731 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-273731 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.830467046s)
--- PASS: TestNoKubernetes/serial/Start (21.83s)

TestStoppedBinaryUpgrade/Setup (2.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (128.75s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2840931364 start -p stopped-upgrade-405283 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:19:55.764997  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2840931364 start -p stopped-upgrade-405283 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.898669038s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2840931364 -p stopped-upgrade-405283 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2840931364 -p stopped-upgrade-405283 stop: (3.307044958s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-405283 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-405283 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.544341993s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.75s)
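For reference, the upgrade sequence exercised above condenses to three steps (commands taken verbatim from this log; /tmp/minikube-v1.32.0.2840931364 is the pinned old release the harness downloaded, and note the old binary takes --vm-driver where the binary under test takes --driver):

    /tmp/minikube-v1.32.0.2840931364 start -p stopped-upgrade-405283 --memory=3072 --vm-driver=kvm2 --container-runtime=crio --auto-update-drivers=false                # 1. provision with the old release
    /tmp/minikube-v1.32.0.2840931364 -p stopped-upgrade-405283 stop                                                                                                     # 2. stop the cluster
    out/minikube-linux-amd64 start -p stopped-upgrade-405283 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false    # 3. restart with the binary under test

The stray cert_rotation error mid-run appears to come from a background client-cert watcher still pointing at the deleted functional-993605 profile and is unrelated to this test.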

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-273731 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-273731 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.304411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
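How this assertion works: systemctl is-active --quiet <unit> exits 0 only when the unit is active, so the non-zero exit recorded above is the passing outcome; status 4 here likely means the kubelet unit is not even loaded on a --no-kubernetes node. A minimal local equivalent:

    sudo systemctl is-active --quiet kubelet; echo "exit=$?"   # 0 = running (would fail this test), non-zero = not running (passes)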

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.16s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.46s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-273731
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-273731: (1.462034707s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (35.22s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-273731 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1017 20:20:24.447638  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-273731 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.222837921s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-273731 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-273731 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.388562ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-405283
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-405283: (1.063138551s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                    
TestPause/serial/Start (60.65s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-218711 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-218711 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.654693255s)
--- PASS: TestPause/serial/Start (60.65s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90.31s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.314618379s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.94s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m31.939394257s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.94s)
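The TestNetworkPlugins groups all issue the same start invocation and vary only the CNI selection; the flags observed across this run:

    --cni=kindnet                      # kindnet group
    --cni=calico                       # calico group
    --cni=flannel                      # flannel group
    --cni=bridge                       # bridge group
    --cni=testdata/kube-flannel.yaml   # custom-flannel group (a manifest path instead of a plugin name)
    --enable-default-cni=true          # enable-default-cni group
    (no CNI flag)                      # auto group: minikube picks the default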

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (65.63s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-218711 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-218711 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m5.603182884s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (65.63s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (101.53s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.534500802s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.53s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-269519 "pgrep -a kubelet"
I1017 20:23:51.447843  113592 config.go:182] Loaded profile config "auto-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
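The KubeletFlags checks dump the live kubelet command line inside the node; pgrep -a prints the full argument list alongside the PID, which is what the test inspects:

    out/minikube-linux-amd64 ssh -p auto-269519 "pgrep -a kubelet"   # PID plus the complete kubelet flag set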

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9gbgd" [dfcf3228-6370-49cc-8c33-3ac202a2b439] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9gbgd" [dfcf3228-6370-49cc-8c33-3ac202a2b439] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005835407s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
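The NetCatPod steps poll pods matching app=netcat until the Pending-to-Running transition logged above. A rough standalone equivalent (kubectl wait shown for illustration; helpers_test.go implements its own polling loop):

    kubectl --context auto-269519 -n default wait --for=condition=ready pod -l app=netcat --timeout=15m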

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
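The Localhost/HairPin pair probes two loopback paths from inside the netcat pod: the first dials the pod's own port directly, while the second dials the pod's own service name, which only succeeds when traffic can hairpin back through the service VIP to the originating pod:

    nc -w 5 -i 5 -z localhost 8080   # direct loopback
    nc -w 5 -i 5 -z netcat 8080      # via the netcat service VIP (hairpin)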

                                                
                                    
TestPause/serial/Pause (0.95s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-218711 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-28js5" [f8fb98d5-9c87-485a-b1a4-01d3924b7e37] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006191978s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-218711 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-218711 --output=json --layout=cluster: exit status 2 (294.562555ms)

                                                
                                                
-- stdout --
	{"Name":"pause-218711","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-218711","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
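Decoding the cluster-layout JSON above: minikube reports StatusCode 418 for Paused, 405 for Stopped, and 200 for OK, and the CLI exits non-zero (2 here) because not every component is in a running state, hence the expected "Non-zero exit". To pull just the per-component states (jq is an assumption here, not something the harness uses):

    out/minikube-linux-amd64 status -p pause-218711 --output=json --layout=cluster | jq '.Nodes[].Components | map_values(.StatusName)'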

                                                
                                    
TestPause/serial/Unpause (0.79s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-218711 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
TestPause/serial/PauseAgain (1.01s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-218711 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-218711 --alsologtostderr -v=5: (1.013419552s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

                                                
                                    
TestPause/serial/DeletePaused (0.93s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-218711 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.93s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.71s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.707448602s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.71s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.87s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.866879284s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.87s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-269519 "pgrep -a kubelet"
I1017 20:24:19.471805  113592 config.go:182] Loaded profile config "kindnet-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7c45t" [002ca57e-fc6d-474c-bd98-c084e55bbf38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7c45t" [002ca57e-fc6d-474c-bd98-c084e55bbf38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004905373s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.322235522s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (85.39s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.392910287s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.39s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cx7mx" [73c1feb6-f07c-4e7d-97b0-93b1fdeffef2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1017 20:24:55.765087  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004317249s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-269519 "pgrep -a kubelet"
I1017 20:24:59.467423  113592 config.go:182] Loaded profile config "calico-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.29s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-btfvk" [9d878210-e06a-4ca2-a3d0-68aca1921e21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-btfvk" [9d878210-e06a-4ca2-a3d0-68aca1921e21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004519203s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.61s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-269519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.605781088s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.61s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-269519 "pgrep -a kubelet"
I1017 20:25:31.231269  113592 config.go:182] Loaded profile config "custom-flannel-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4hc2s" [ffdabed2-e80a-4454-84c8-13416bc6da36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4hc2s" [ffdabed2-e80a-4454-84c8-13416bc6da36] Running
E1017 20:25:41.364535  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005538795s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-269519 "pgrep -a kubelet"
I1017 20:25:42.783770  113592 config.go:182] Loaded profile config "enable-default-cni-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8pfwc" [66c7caa8-841f-453d-b567-b09fc47eee90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8pfwc" [66c7caa8-841f-453d-b567-b09fc47eee90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00559074s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (67.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-902916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-902916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m7.23619619s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (67.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (95.97s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-140964 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-140964 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m35.970592737s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.97s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qqr8j" [d2269545-eb7d-4998-8798-15553ca38244] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005069726s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-269519 "pgrep -a kubelet"
I1017 20:26:21.289652  113592 config.go:182] Loaded profile config "flannel-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ltp58" [4a74b14e-4bb7-4211-b247-480eb1779db5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ltp58" [4a74b14e-4bb7-4211-b247-480eb1779db5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004919647s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-269519 "pgrep -a kubelet"
I1017 20:26:37.744384  113592 config.go:182] Loaded profile config "bridge-269519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.41s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-269519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cp4dx" [dcfc853f-cce1-47f1-8ba1-44eaa362fb03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cp4dx" [dcfc853f-cce1-47f1-8ba1-44eaa362fb03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005267223s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-269519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-269519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (63.43s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-141127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-141127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m3.426471s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-016497 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-016497 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m5.953709829s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.95s)
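default-k8s-diff-port differs from the other StartStop groups only in --apiserver-port=8444: the API server listens on a non-default port (8443 is the usual one) and the kubeconfig minikube writes has to point at it. The full invocation from the log:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-016497 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1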

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-902916 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [14dd82c0-f26b-4704-b537-fd1011aedbfd] Pending
helpers_test.go:352: "busybox" [14dd82c0-f26b-4704-b537-fd1011aedbfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [14dd82c0-f26b-4704-b537-fd1011aedbfd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004287037s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-902916 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)
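The DeployApp steps double as a runtime smoke test: once busybox reaches Running, the exec confirms a shell can be spawned in the container and reports its open-file limit:

    kubectl --context old-k8s-version-902916 exec busybox -- /bin/sh -c "ulimit -n"   # prints the container's fd soft limit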

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-902916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-902916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.199126738s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-902916 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)
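The --images/--registries pairs override where an addon pulls its images, using <Component>=<value> syntax; fake.domain looks like a deliberately unreachable registry, so the step exercises the enable-and-describe path rather than a working metrics-server:

    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-902916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain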

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (72.92s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-902916 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-902916 --alsologtostderr -v=3: (1m12.915732044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (72.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.32s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-140964 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [29a70604-8739-4466-b1d4-22dd95bb5bf0] Pending
helpers_test.go:352: "busybox" [29a70604-8739-4466-b1d4-22dd95bb5bf0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [29a70604-8739-4466-b1d4-22dd95bb5bf0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004753325s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-140964 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.31s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-141127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [223c00e7-7e22-4aa3-8153-187774ad337a] Pending
helpers_test.go:352: "busybox" [223c00e7-7e22-4aa3-8153-187774ad337a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [223c00e7-7e22-4aa3-8153-187774ad337a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004285774s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-141127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-140964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-140964 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (89.1s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-140964 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-140964 --alsologtostderr -v=3: (1m29.101379972s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-141127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-141127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (84.59s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-141127 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-141127 --alsologtostderr -v=3: (1m24.591146384s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (84.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-016497 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3a71d56d-883b-4668-8c38-5b0a047775a0] Pending
helpers_test.go:352: "busybox" [3a71d56d-883b-4668-8c38-5b0a047775a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3a71d56d-883b-4668-8c38-5b0a047775a0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004689602s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-016497 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-016497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-016497 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (82.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-016497 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-016497 --alsologtostderr -v=3: (1m22.336204899s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (82.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-902916 -n old-k8s-version-902916
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-902916 -n old-k8s-version-902916: exit status 7 (78.367756ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-902916 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
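Note: the "exit status 7 (may be ok)" allowance above reflects how `minikube status` encodes component state in its exit code (per the minikube source, separate bits for the host, kubelet, and apiserver), so 7 immediately after a stop simply means all three are down. A sketch of making the same allowance in a script, under that assumption:

#!/bin/sh
# Tolerate exit status 7 (host, kubelet, and apiserver all stopped) from
# `minikube status`, mirroring the "(may be ok)" handling in the test above.
out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-902916
code=$?
if [ "$code" -ne 0 ] && [ "$code" -ne 7 ]; then
  echo "unexpected status exit code: $code" >&2
  exit 1
fi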

TestStartStop/group/old-k8s-version/serial/SecondStart (44.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-902916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1017 20:28:51.722319  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:51.728708  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:51.740144  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:51.761617  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:51.803093  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:51.884612  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:52.045977  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:52.367877  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:53.009799  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:54.291121  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:28:56.852742  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:01.974543  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:12.216832  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.232806  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.239269  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.250754  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.272182  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.313687  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.395298  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.557095  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:13.879288  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:14.521163  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:15.803278  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:18.365552  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-902916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.313020838s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-902916 -n old-k8s-version-902916
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.69s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ntfj2" [d9ca6970-d266-4b6b-9906-0cc6602f47ab] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1017 20:29:23.487248  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ntfj2" [d9ca6970-d266-4b6b-9906-0cc6602f47ab] Running
E1017 20:29:32.698380  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:33.729468  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.003852158s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-140964 -n no-preload-140964
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-140964 -n no-preload-140964: exit status 7 (77.666096ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-140964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (57.62s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-140964 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-140964 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (57.178695872s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-140964 -n no-preload-140964
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.62s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141127 -n embed-certs-141127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141127 -n embed-certs-141127: exit status 7 (79.732683ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-141127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (62.67s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-141127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-141127 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m2.277748824s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141127 -n embed-certs-141127
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (62.67s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ntfj2" [d9ca6970-d266-4b6b-9906-0cc6602f47ab] Running
E1017 20:29:38.838687  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004075497s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-902916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-902916 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
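Note: the "Found non-minikube image" lines come from scanning the `image list --format=json` output for images outside the expected Kubernetes set. A rough manual equivalent, assuming the JSON is an array of objects with a "repoTags" field (an assumption about the output shape, not verified here):

# List image tags that don't come from registry.k8s.io (rough approximation
# of the test's check; the JSON field name is an assumption).
out/minikube-linux-amd64 -p old-k8s-version-902916 image list --format=json \
  | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io'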

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-902916 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-902916 -n old-k8s-version-902916
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-902916 -n old-k8s-version-902916: exit status 2 (261.210876ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-902916 -n old-k8s-version-902916
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-902916 -n old-k8s-version-902916: exit status 2 (259.283694ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-902916 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-902916 -n old-k8s-version-902916
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-902916 -n old-k8s-version-902916
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
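Note: the pause sequence above is internally consistent: while paused, `status` exits 2 with APIServer reporting "Paused" and Kubelet reporting "Stopped", and the test treats exit status 2 as acceptable ("may be ok"). A sketch of the same pause/verify/unpause cycle as a standalone script:

#!/bin/sh
# Pause the cluster, confirm the paused state (status exits non-zero while
# paused, hence the `|| true`), then unpause and re-check.
p=old-k8s-version-902916
out/minikube-linux-amd64 pause -p "$p" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p" || true   # expect "Paused"
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$p" || true     # expect "Stopped"
out/minikube-linux-amd64 unpause -p "$p" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p"           # expect "Running"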

TestStartStop/group/newest-cni/serial/FirstStart (68.16s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-461731 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-461731 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m8.154853201s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (68.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497: exit status 7 (88.346723ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-016497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (83.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-016497 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1017 20:29:53.205571  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.212006  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.223516  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.245192  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.286673  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.368241  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.529971  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:53.851743  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:54.211641  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:54.493174  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:55.765282  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/functional-993605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:55.774898  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:29:58.336708  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:03.458021  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:13.660249  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/auto-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:13.699926  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-016497 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m22.708834729s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (83.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6xpbj" [e6be0f51-2dd3-42c2-8cb9-9ccf407ca5f3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1017 20:30:31.522286  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:31.528840  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:31.540360  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:31.561963  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:31.603470  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:31.685015  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:31.847065  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6xpbj" [e6be0f51-2dd3-42c2-8cb9-9ccf407ca5f3] Running
E1017 20:30:32.169245  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.00575293s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1017 20:30:32.811006  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2fqk8" [596152d5-7fbc-4e3b-95d1-3a982d304369] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1017 20:30:34.092977  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:34.182142  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:35.173779  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/kindnet-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:36.654687  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2fqk8" [596152d5-7fbc-4e3b-95d1-3a982d304369] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004572579s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6xpbj" [e6be0f51-2dd3-42c2-8cb9-9ccf407ca5f3] Running
E1017 20:30:41.364603  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/addons-322722/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:41.776372  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005443928s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-140964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-140964 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (3.58s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-140964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-140964 --alsologtostderr -v=1: (1.041158022s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-140964 -n no-preload-140964
E1017 20:30:43.703193  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-140964 -n no-preload-140964: exit status 2 (314.127859ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-140964 -n no-preload-140964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-140964 -n no-preload-140964: exit status 2 (325.999361ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-140964 --alsologtostderr -v=1
E1017 20:30:44.345130  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-140964 -n no-preload-140964
E1017 20:30:45.626536  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-140964 -n no-preload-140964
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.58s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2fqk8" [596152d5-7fbc-4e3b-95d1-3a982d304369] Running
E1017 20:30:43.055443  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:43.061939  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:43.073479  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:43.095106  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:43.138041  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:43.219905  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:43.381300  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008753908s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-141127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-141127 image list --format=json
E1017 20:30:48.188027  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (3.99s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-141127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-141127 --alsologtostderr -v=1: (1.117835339s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141127 -n embed-certs-141127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141127 -n embed-certs-141127: exit status 2 (308.843632ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-141127 -n embed-certs-141127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-141127 -n embed-certs-141127: exit status 2 (301.274997ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-141127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-141127 --alsologtostderr -v=1: (1.338577693s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141127 -n embed-certs-141127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-141127 -n embed-certs-141127
E1017 20:30:52.018034  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.99s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-461731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-461731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126455107s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (11.53s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-461731 --alsologtostderr -v=3
E1017 20:31:03.552313  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-461731 --alsologtostderr -v=3: (11.527306817s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.53s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-461731 -n newest-cni-461731
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-461731 -n newest-cni-461731: exit status 7 (75.225ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-461731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (34.77s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-461731 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-461731 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (34.367416303s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-461731 -n newest-cni-461731
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.77s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kh852" [a3a5ab51-86eb-4f2b-888e-04ae4fe53ceb] Running
E1017 20:31:12.500107  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/custom-flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.066037  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.072505  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.084003  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.105456  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.144015  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/calico-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.147446  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.228960  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.390585  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:15.711975  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:16.353382  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004897493s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kh852" [a3a5ab51-86eb-4f2b-888e-04ae4fe53ceb] Running
E1017 20:31:17.634798  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:31:20.196249  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004955259s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-016497 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-016497 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-016497 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497: exit status 2 (255.87727ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497
E1017 20:31:24.034106  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/enable-default-cni-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497: exit status 2 (256.83407ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-016497 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-016497 -n default-k8s-diff-port-016497
E1017 20:31:25.318176  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/flannel-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)
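A sketch of the pause/unpause round trip this subtest performs, using the same profile; while paused, `status` deliberately exits non-zero (the "may be ok" notes above), reporting the API server as Paused and the kubelet as Stopped:

    PROFILE=default-k8s-diff-port-016497
    out/minikube-linux-amd64 pause -p "$PROFILE"
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" || true  # "Paused", exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" || true    # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p "$PROFILE"
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE"          # back to "Running"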

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-461731 image list --format=json
E1017 20:31:43.257389  113592 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/bridge-269519/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (4.17s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-461731 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-461731 --alsologtostderr -v=1: (1.787445362s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-461731 -n newest-cni-461731
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-461731 -n newest-cni-461731: exit status 2 (331.480026ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-461731 -n newest-cni-461731
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-461731 -n newest-cni-461731: exit status 2 (349.402126ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-461731 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-461731 -n newest-cni-461731
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-461731 -n newest-cni-461731
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.17s)

Test skip (40/330)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.52
268 TestNetworkPlugins/group/cilium 4.87
283 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)
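Both skips follow from the preload: when the preloaded tarball is already cached, minikube neither re-caches images nor downloads separate binaries. A quick local check (the cache path and lz4 availability are assumptions, not something this report verifies):

    ls ~/.minikube/cache/preloaded-tarball/
    # The tarball bundles kubeadm/kubectl/kubelet alongside the container images.
    tar -I lz4 -tf ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-*.tar.lz4 \
      | grep -E 'kube(adm|ctl|let)$'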

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.31s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-322722 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
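These Docker-only suites run on jobs whose cluster uses the docker runtime instead; roughly (profile name hypothetical):

    out/minikube-linux-amd64 start -p docker-rt --driver=kvm2 --container-runtime=docker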

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
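All eight tunnel subtests skip for the same reason: minikube tunnel edits the host routing table, which requires passwordless sudo. A rough local equivalent of the probe (the helper's exact check may differ):

    sudo -n true 2>/dev/null || echo "sudo needs a password; tunnel tests will skip"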

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.52s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-269519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-269519

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-269519

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/hosts:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/resolv.conf:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-269519

>>> host: crictl pods:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: crictl containers:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> k8s: describe netcat deployment:
error: context "kubenet-269519" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-269519" does not exist

>>> k8s: netcat logs:
error: context "kubenet-269519" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-269519" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-269519" does not exist

>>> k8s: coredns logs:
error: context "kubenet-269519" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-269519" does not exist

>>> k8s: api server logs:
error: context "kubenet-269519" does not exist

>>> host: /etc/cni:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: ip a s:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: ip r s:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: iptables-save:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: iptables table nat:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-269519" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-269519" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-269519" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: kubelet daemon config:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> k8s: kubelet logs:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:19:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.19:8443
  name: NoKubernetes-273731
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:18:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.34:8443
  name: cert-expiration-292976
contexts:
- context:
    cluster: NoKubernetes-273731
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:19:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-273731
  name: NoKubernetes-273731
- context:
    cluster: cert-expiration-292976
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:18:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-292976
  name: cert-expiration-292976
current-context: NoKubernetes-273731
kind: Config
users:
- name: NoKubernetes-273731
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/NoKubernetes-273731/client.crt
    client-key: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/NoKubernetes-273731/client.key
- name: cert-expiration-292976
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/cert-expiration-292976/client.crt
    client-key: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/cert-expiration-292976/client.key
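Note that this kubeconfig only knows NoKubernetes-273731 and cert-expiration-292976; the kubenet-269519 profile was never started, which is why every kubectl probe in this dump fails with "context was not found". To list the contexts a kubeconfig actually offers:

    kubectl config get-contexts -o name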

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-269519

>>> host: docker daemon status:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: docker daemon config:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: docker system info:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: cri-docker daemon status:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: cri-docker daemon config:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: cri-dockerd version:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: containerd daemon status:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: containerd daemon config:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: containerd config dump:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: crio daemon status:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: crio daemon config:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: /etc/crio:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

>>> host: crio config:
* Profile "kubenet-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269519"

----------------------- debugLogs end: kubenet-269519 [took: 3.340718917s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-269519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-269519
--- SKIP: TestNetworkPlugins/group/kubenet (3.52s)

TestNetworkPlugins/group/cilium (4.87s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-269519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-269519" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-269519

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-269519

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-269519" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-269519" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-269519" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-269519" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-269519" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: kubelet daemon config:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> k8s: kubelet logs:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:19:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.19:8443
  name: NoKubernetes-273731
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-109682/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:18:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.34:8443
  name: cert-expiration-292976
contexts:
- context:
    cluster: NoKubernetes-273731
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:19:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-273731
  name: NoKubernetes-273731
- context:
    cluster: cert-expiration-292976
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:18:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-292976
  name: cert-expiration-292976
current-context: NoKubernetes-273731
kind: Config
users:
- name: NoKubernetes-273731
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/NoKubernetes-273731/client.crt
    client-key: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/NoKubernetes-273731/client.key
- name: cert-expiration-292976
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/cert-expiration-292976/client.crt
    client-key: /home/jenkins/minikube-integration/21664-109682/.minikube/profiles/cert-expiration-292976/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-269519

>>> host: docker daemon status:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: docker daemon config:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: docker system info:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: cri-docker daemon status:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: cri-docker daemon config:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: cri-dockerd version:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: containerd daemon status:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: containerd daemon config:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: containerd config dump:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: crio daemon status:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: crio daemon config:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: /etc/crio:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

>>> host: crio config:
* Profile "cilium-269519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269519"

----------------------- debugLogs end: cilium-269519 [took: 4.394743554s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-269519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-269519
--- SKIP: TestNetworkPlugins/group/cilium (4.87s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-141330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-141330
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)