Test Report: KVM_Linux_crio 21681

595bbf5b740d7896a57580209f3c1775d52404c7:2025-10-08:41822

Failed tests (3/324)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    164.93
244    TestPreload                                    162.98
287    TestPause/serial/SecondStartNoReconfiguration  80.37
TestAddons/parallel/Ingress (164.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-527125 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-527125 replace --force -f testdata/nginx-ingress-v1.yaml
2025/10/08 14:13:39 [DEBUG] GET http://192.168.39.51:5000
addons_test.go:247: (dbg) Run:  kubectl --context addons-527125 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a70ffabf-82a6-43e5-bbb6-0693c530d883] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a70ffabf-82a6-43e5-bbb6-0693c530d883] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.003856348s
I1008 14:13:56.917332  361915 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-527125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.998802427s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-527125 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.51
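Note: "ssh: Process exited with status 28" above is curl's exit code 28 (CURLE_OPERATION_TIMEDOUT), so the request inside the VM never got a response from the ingress controller before the deadline. The sketch below is a rough standalone equivalent of the probe addons_test.go:264 runs via `minikube ssh curl`; it targets the node IP from this run instead of 127.0.0.1 inside the VM, and the URL and timeout are illustrative values, not the test's.

// probe_ingress.go: poll the ingress with a spoofed Host header, the way the
// test's `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` does.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url, host string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	// In net/http, req.Host (not the header map) controls the Host: header
	// sent on the wire, mirroring curl's -H 'Host: ...'.
	req.Host = host
	resp, err := client.Do(req)
	if err != nil {
		return err // a client timeout here corresponds to curl exit status 28
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// 192.168.39.51 is the VM IP from this run; the 10s timeout is arbitrary.
	if err := probe("http://192.168.39.51/", "nginx.example.com", 10*time.Second); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("ingress responded")
}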
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-527125 -n addons-527125
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 logs -n 25: (1.453428802s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-422477                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-422477 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │ 08 Oct 25 14:09 UTC │
	│ start   │ --download-only -p binary-mirror-272062 --alsologtostderr --binary-mirror http://127.0.0.1:39803 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-272062 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │                     │
	│ delete  │ -p binary-mirror-272062                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-272062 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │ 08 Oct 25 14:09 UTC │
	│ addons  │ enable dashboard -p addons-527125                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │                     │
	│ addons  │ disable dashboard -p addons-527125                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │                     │
	│ start   │ -p addons-527125 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ enable headlamp -p addons-527125 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ ip      │ addons-527125 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-527125                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ addons  │ addons-527125 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:13 UTC │
	│ ssh     │ addons-527125 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │                     │
	│ ssh     │ addons-527125 ssh cat /opt/local-path-provisioner/pvc-58128b55-1842-4e77-9262-1af4a121e42e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:13 UTC │ 08 Oct 25 14:14 UTC │
	│ addons  │ addons-527125 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:14 UTC │ 08 Oct 25 14:14 UTC │
	│ addons  │ addons-527125 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:14 UTC │ 08 Oct 25 14:14 UTC │
	│ addons  │ addons-527125 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:14 UTC │ 08 Oct 25 14:14 UTC │
	│ ip      │ addons-527125 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-527125        │ jenkins │ v1.37.0 │ 08 Oct 25 14:16 UTC │ 08 Oct 25 14:16 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:09:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:09:40.816023  362613 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:09:40.816257  362613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:09:40.816265  362613 out.go:374] Setting ErrFile to fd 2...
	I1008 14:09:40.816270  362613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:09:40.816473  362613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:09:40.817001  362613 out.go:368] Setting JSON to false
	I1008 14:09:40.817941  362613 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3113,"bootTime":1759929468,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:09:40.818026  362613 start.go:141] virtualization: kvm guest
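The hostinfo line above is the host snapshot minikube logs at startup. Below is a minimal sketch of collecting the same fields, assuming the gopsutil library (whose host.InfoStat JSON tags match the keys in that log line); the import path and call are an assumption about how such a snapshot can be gathered, not a claim about minikube's exact code path.

// hostinfo.go: emit a host snapshot with the same field set as the log line.
package main

import (
	"encoding/json"
	"fmt"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	info, err := host.Info() // on Linux this reads /proc, DMI, and uname data
	if err != nil {
		panic(err)
	}
	out, _ := json.Marshal(info)
	fmt.Println(string(out))
}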
	I1008 14:09:40.820113  362613 out.go:179] * [addons-527125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:09:40.821657  362613 notify.go:220] Checking for updates...
	I1008 14:09:40.821724  362613 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:09:40.823150  362613 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:09:40.824521  362613 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 14:09:40.826101  362613 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:09:40.827334  362613 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:09:40.828557  362613 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:09:40.830113  362613 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:09:40.863544  362613 out.go:179] * Using the kvm2 driver based on user configuration
	I1008 14:09:40.865071  362613 start.go:305] selected driver: kvm2
	I1008 14:09:40.865093  362613 start.go:925] validating driver "kvm2" against <nil>
	I1008 14:09:40.865106  362613 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:09:40.865873  362613 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:09:40.865980  362613 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 14:09:40.880691  362613 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 14:09:40.880731  362613 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 14:09:40.895680  362613 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 14:09:40.895737  362613 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:09:40.896024  362613 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:09:40.896055  362613 cni.go:84] Creating CNI manager for ""
	I1008 14:09:40.896097  362613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 14:09:40.896108  362613 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 14:09:40.896155  362613 start.go:349] cluster config:
	{Name:addons-527125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-527125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:09:40.896249  362613 iso.go:125] acquiring lock: {Name:mkaa45da6237a5a16f5f1d676ea2e57ba969b9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:09:40.898444  362613 out.go:179] * Starting "addons-527125" primary control-plane node in "addons-527125" cluster
	I1008 14:09:40.899983  362613 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:09:40.900056  362613 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:09:40.900073  362613 cache.go:58] Caching tarball of preloaded images
	I1008 14:09:40.900194  362613 preload.go:233] Found /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 14:09:40.900208  362613 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 14:09:40.900588  362613 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/config.json ...
	I1008 14:09:40.900623  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/config.json: {Name:mkf3cb08e8ae1685cb43cad0550262a5d51ea0a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:09:40.900802  362613 start.go:360] acquireMachinesLock for addons-527125: {Name:mka12a7774d0aa7dccf7190e47a0dc3a854191d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 14:09:40.900880  362613 start.go:364] duration metric: took 61.913µs to acquireMachinesLock for "addons-527125"
	I1008 14:09:40.900907  362613 start.go:93] Provisioning new machine with config: &{Name:addons-527125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-527125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:09:40.900982  362613 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 14:09:40.902782  362613 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1008 14:09:40.902956  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:09:40.903010  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:09:40.916952  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I1008 14:09:40.917571  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:09:40.918219  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:09:40.918241  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:09:40.918654  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:09:40.918888  362613 main.go:141] libmachine: (addons-527125) Calling .GetMachineName
	I1008 14:09:40.919084  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:09:40.919281  362613 start.go:159] libmachine.API.Create for "addons-527125" (driver="kvm2")
	I1008 14:09:40.919316  362613 client.go:168] LocalClient.Create starting
	I1008 14:09:40.919384  362613 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem
	I1008 14:09:41.236631  362613 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem
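The two lines above are libmachine's one-time certificate bootstrap: a CA (ca.pem) followed by a client certificate signed by it. Below is a minimal sketch of the CA half using only the Go standard library; the subject name, key size, and validity period are illustrative choices, not libmachine's actual parameters.

// makeca.go: generate a self-signed CA certificate and print it as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template serves as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}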
	I1008 14:09:41.571380  362613 main.go:141] libmachine: Running pre-create checks...
	I1008 14:09:41.571403  362613 main.go:141] libmachine: (addons-527125) Calling .PreCreateCheck
	I1008 14:09:41.571951  362613 main.go:141] libmachine: (addons-527125) Calling .GetConfigRaw
	I1008 14:09:41.572471  362613 main.go:141] libmachine: Creating machine...
	I1008 14:09:41.572489  362613 main.go:141] libmachine: (addons-527125) Calling .Create
	I1008 14:09:41.572661  362613 main.go:141] libmachine: (addons-527125) creating domain...
	I1008 14:09:41.572680  362613 main.go:141] libmachine: (addons-527125) creating network...
	I1008 14:09:41.574255  362613 main.go:141] libmachine: (addons-527125) DBG | found existing default network
	I1008 14:09:41.574439  362613 main.go:141] libmachine: (addons-527125) DBG | <network>
	I1008 14:09:41.574460  362613 main.go:141] libmachine: (addons-527125) DBG |   <name>default</name>
	I1008 14:09:41.574472  362613 main.go:141] libmachine: (addons-527125) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1008 14:09:41.574494  362613 main.go:141] libmachine: (addons-527125) DBG |   <forward mode='nat'>
	I1008 14:09:41.574502  362613 main.go:141] libmachine: (addons-527125) DBG |     <nat>
	I1008 14:09:41.574511  362613 main.go:141] libmachine: (addons-527125) DBG |       <port start='1024' end='65535'/>
	I1008 14:09:41.574520  362613 main.go:141] libmachine: (addons-527125) DBG |     </nat>
	I1008 14:09:41.574531  362613 main.go:141] libmachine: (addons-527125) DBG |   </forward>
	I1008 14:09:41.574542  362613 main.go:141] libmachine: (addons-527125) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1008 14:09:41.574551  362613 main.go:141] libmachine: (addons-527125) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1008 14:09:41.574563  362613 main.go:141] libmachine: (addons-527125) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1008 14:09:41.574570  362613 main.go:141] libmachine: (addons-527125) DBG |     <dhcp>
	I1008 14:09:41.574581  362613 main.go:141] libmachine: (addons-527125) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1008 14:09:41.574597  362613 main.go:141] libmachine: (addons-527125) DBG |     </dhcp>
	I1008 14:09:41.574632  362613 main.go:141] libmachine: (addons-527125) DBG |   </ip>
	I1008 14:09:41.574659  362613 main.go:141] libmachine: (addons-527125) DBG | </network>
	I1008 14:09:41.574674  362613 main.go:141] libmachine: (addons-527125) DBG | 
	I1008 14:09:41.575242  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:41.575017  362641 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136c0}
	I1008 14:09:41.575270  362613 main.go:141] libmachine: (addons-527125) DBG | defining private network:
	I1008 14:09:41.575281  362613 main.go:141] libmachine: (addons-527125) DBG | 
	I1008 14:09:41.575287  362613 main.go:141] libmachine: (addons-527125) DBG | <network>
	I1008 14:09:41.575295  362613 main.go:141] libmachine: (addons-527125) DBG |   <name>mk-addons-527125</name>
	I1008 14:09:41.575304  362613 main.go:141] libmachine: (addons-527125) DBG |   <dns enable='no'/>
	I1008 14:09:41.575343  362613 main.go:141] libmachine: (addons-527125) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 14:09:41.575399  362613 main.go:141] libmachine: (addons-527125) DBG |     <dhcp>
	I1008 14:09:41.575415  362613 main.go:141] libmachine: (addons-527125) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 14:09:41.575425  362613 main.go:141] libmachine: (addons-527125) DBG |     </dhcp>
	I1008 14:09:41.575435  362613 main.go:141] libmachine: (addons-527125) DBG |   </ip>
	I1008 14:09:41.575441  362613 main.go:141] libmachine: (addons-527125) DBG | </network>
	I1008 14:09:41.575455  362613 main.go:141] libmachine: (addons-527125) DBG | 
	I1008 14:09:41.581452  362613 main.go:141] libmachine: (addons-527125) DBG | creating private network mk-addons-527125 192.168.39.0/24...
	I1008 14:09:41.650741  362613 main.go:141] libmachine: (addons-527125) DBG | private network mk-addons-527125 192.168.39.0/24 created
	I1008 14:09:41.651043  362613 main.go:141] libmachine: (addons-527125) DBG | <network>
	I1008 14:09:41.651059  362613 main.go:141] libmachine: (addons-527125) DBG |   <name>mk-addons-527125</name>
	I1008 14:09:41.651068  362613 main.go:141] libmachine: (addons-527125) setting up store path in /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125 ...
	I1008 14:09:41.651074  362613 main.go:141] libmachine: (addons-527125) DBG |   <uuid>975009ad-1887-424b-98c8-1afcbc3a0609</uuid>
	I1008 14:09:41.651081  362613 main.go:141] libmachine: (addons-527125) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1008 14:09:41.651086  362613 main.go:141] libmachine: (addons-527125) DBG |   <mac address='52:54:00:0e:d7:26'/>
	I1008 14:09:41.651092  362613 main.go:141] libmachine: (addons-527125) DBG |   <dns enable='no'/>
	I1008 14:09:41.651109  362613 main.go:141] libmachine: (addons-527125) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1008 14:09:41.651120  362613 main.go:141] libmachine: (addons-527125) building disk image from file:///home/jenkins/minikube-integration/21681-357044/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1008 14:09:41.651128  362613 main.go:141] libmachine: (addons-527125) DBG |     <dhcp>
	I1008 14:09:41.651138  362613 main.go:141] libmachine: (addons-527125) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1008 14:09:41.651143  362613 main.go:141] libmachine: (addons-527125) DBG |     </dhcp>
	I1008 14:09:41.651147  362613 main.go:141] libmachine: (addons-527125) DBG |   </ip>
	I1008 14:09:41.651152  362613 main.go:141] libmachine: (addons-527125) DBG | </network>
	I1008 14:09:41.651162  362613 main.go:141] libmachine: (addons-527125) DBG | 
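For reference, the network-creation step above reduces to a libvirt define-and-start pair. A sketch assuming the official libvirt Go bindings (libvirt.org/go/libvirt) and a local libvirtd at qemu:///system; the XML mirrors the private-network definition printed in the log.

// definenet.go: define and start the isolated DHCP network for the VM.
package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

const netXML = `<network>
  <name>mk-addons-527125</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// DefineXML makes the network persistent; Create starts it, which is
	// when libvirt brings up the virbr bridge and its dnsmasq instance.
	network, err := conn.NetworkDefineXML(netXML)
	if err != nil {
		panic(err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		panic(err)
	}
	fmt.Println("network mk-addons-527125 defined and started")
}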
	I1008 14:09:41.651176  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:41.651028  362641 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:09:41.651228  362613 main.go:141] libmachine: (addons-527125) Downloading /home/jenkins/minikube-integration/21681-357044/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21681-357044/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1008 14:09:41.936068  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:41.935899  362641 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa...
	I1008 14:09:42.013402  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:42.013201  362641 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/addons-527125.rawdisk...
	I1008 14:09:42.013449  362613 main.go:141] libmachine: (addons-527125) DBG | Writing magic tar header
	I1008 14:09:42.013465  362613 main.go:141] libmachine: (addons-527125) DBG | Writing SSH key tar header
	I1008 14:09:42.013473  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:42.013324  362641 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125 ...
	I1008 14:09:42.013497  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125
	I1008 14:09:42.013551  362613 main.go:141] libmachine: (addons-527125) setting executable bit set on /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125 (perms=drwx------)
	I1008 14:09:42.013597  362613 main.go:141] libmachine: (addons-527125) setting executable bit set on /home/jenkins/minikube-integration/21681-357044/.minikube/machines (perms=drwxr-xr-x)
	I1008 14:09:42.013610  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21681-357044/.minikube/machines
	I1008 14:09:42.013620  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:09:42.013627  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21681-357044
	I1008 14:09:42.013640  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1008 14:09:42.013645  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home/jenkins
	I1008 14:09:42.013651  362613 main.go:141] libmachine: (addons-527125) DBG | checking permissions on dir: /home
	I1008 14:09:42.013657  362613 main.go:141] libmachine: (addons-527125) DBG | skipping /home - not owner
	I1008 14:09:42.013681  362613 main.go:141] libmachine: (addons-527125) setting executable bit set on /home/jenkins/minikube-integration/21681-357044/.minikube (perms=drwxr-xr-x)
	I1008 14:09:42.013691  362613 main.go:141] libmachine: (addons-527125) setting executable bit set on /home/jenkins/minikube-integration/21681-357044 (perms=drwxrwxr-x)
	I1008 14:09:42.013698  362613 main.go:141] libmachine: (addons-527125) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1008 14:09:42.013703  362613 main.go:141] libmachine: (addons-527125) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1008 14:09:42.013709  362613 main.go:141] libmachine: (addons-527125) defining domain...
	I1008 14:09:42.015016  362613 main.go:141] libmachine: (addons-527125) defining domain using XML: 
	I1008 14:09:42.015046  362613 main.go:141] libmachine: (addons-527125) <domain type='kvm'>
	I1008 14:09:42.015065  362613 main.go:141] libmachine: (addons-527125)   <name>addons-527125</name>
	I1008 14:09:42.015077  362613 main.go:141] libmachine: (addons-527125)   <memory unit='MiB'>4096</memory>
	I1008 14:09:42.015086  362613 main.go:141] libmachine: (addons-527125)   <vcpu>2</vcpu>
	I1008 14:09:42.015096  362613 main.go:141] libmachine: (addons-527125)   <features>
	I1008 14:09:42.015103  362613 main.go:141] libmachine: (addons-527125)     <acpi/>
	I1008 14:09:42.015117  362613 main.go:141] libmachine: (addons-527125)     <apic/>
	I1008 14:09:42.015127  362613 main.go:141] libmachine: (addons-527125)     <pae/>
	I1008 14:09:42.015138  362613 main.go:141] libmachine: (addons-527125)   </features>
	I1008 14:09:42.015173  362613 main.go:141] libmachine: (addons-527125)   <cpu mode='host-passthrough'>
	I1008 14:09:42.015195  362613 main.go:141] libmachine: (addons-527125)   </cpu>
	I1008 14:09:42.015202  362613 main.go:141] libmachine: (addons-527125)   <os>
	I1008 14:09:42.015209  362613 main.go:141] libmachine: (addons-527125)     <type>hvm</type>
	I1008 14:09:42.015246  362613 main.go:141] libmachine: (addons-527125)     <boot dev='cdrom'/>
	I1008 14:09:42.015272  362613 main.go:141] libmachine: (addons-527125)     <boot dev='hd'/>
	I1008 14:09:42.015298  362613 main.go:141] libmachine: (addons-527125)     <bootmenu enable='no'/>
	I1008 14:09:42.015309  362613 main.go:141] libmachine: (addons-527125)   </os>
	I1008 14:09:42.015318  362613 main.go:141] libmachine: (addons-527125)   <devices>
	I1008 14:09:42.015328  362613 main.go:141] libmachine: (addons-527125)     <disk type='file' device='cdrom'>
	I1008 14:09:42.015341  362613 main.go:141] libmachine: (addons-527125)       <source file='/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/boot2docker.iso'/>
	I1008 14:09:42.015366  362613 main.go:141] libmachine: (addons-527125)       <target dev='hdc' bus='scsi'/>
	I1008 14:09:42.015380  362613 main.go:141] libmachine: (addons-527125)       <readonly/>
	I1008 14:09:42.015394  362613 main.go:141] libmachine: (addons-527125)     </disk>
	I1008 14:09:42.015416  362613 main.go:141] libmachine: (addons-527125)     <disk type='file' device='disk'>
	I1008 14:09:42.015440  362613 main.go:141] libmachine: (addons-527125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1008 14:09:42.015466  362613 main.go:141] libmachine: (addons-527125)       <source file='/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/addons-527125.rawdisk'/>
	I1008 14:09:42.015477  362613 main.go:141] libmachine: (addons-527125)       <target dev='hda' bus='virtio'/>
	I1008 14:09:42.015485  362613 main.go:141] libmachine: (addons-527125)     </disk>
	I1008 14:09:42.015494  362613 main.go:141] libmachine: (addons-527125)     <interface type='network'>
	I1008 14:09:42.015500  362613 main.go:141] libmachine: (addons-527125)       <source network='mk-addons-527125'/>
	I1008 14:09:42.015510  362613 main.go:141] libmachine: (addons-527125)       <model type='virtio'/>
	I1008 14:09:42.015527  362613 main.go:141] libmachine: (addons-527125)     </interface>
	I1008 14:09:42.015537  362613 main.go:141] libmachine: (addons-527125)     <interface type='network'>
	I1008 14:09:42.015544  362613 main.go:141] libmachine: (addons-527125)       <source network='default'/>
	I1008 14:09:42.015551  362613 main.go:141] libmachine: (addons-527125)       <model type='virtio'/>
	I1008 14:09:42.015555  362613 main.go:141] libmachine: (addons-527125)     </interface>
	I1008 14:09:42.015563  362613 main.go:141] libmachine: (addons-527125)     <serial type='pty'>
	I1008 14:09:42.015571  362613 main.go:141] libmachine: (addons-527125)       <target port='0'/>
	I1008 14:09:42.015578  362613 main.go:141] libmachine: (addons-527125)     </serial>
	I1008 14:09:42.015586  362613 main.go:141] libmachine: (addons-527125)     <console type='pty'>
	I1008 14:09:42.015598  362613 main.go:141] libmachine: (addons-527125)       <target type='serial' port='0'/>
	I1008 14:09:42.015606  362613 main.go:141] libmachine: (addons-527125)     </console>
	I1008 14:09:42.015623  362613 main.go:141] libmachine: (addons-527125)     <rng model='virtio'>
	I1008 14:09:42.015635  362613 main.go:141] libmachine: (addons-527125)       <backend model='random'>/dev/random</backend>
	I1008 14:09:42.015644  362613 main.go:141] libmachine: (addons-527125)     </rng>
	I1008 14:09:42.015728  362613 main.go:141] libmachine: (addons-527125)   </devices>
	I1008 14:09:42.015762  362613 main.go:141] libmachine: (addons-527125) </domain>
	I1008 14:09:42.015781  362613 main.go:141] libmachine: (addons-527125) 
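The domain then goes through the same define-then-start sequence as the network ("defining domain..." above, "starting domain..." below). A sketch, again assuming the libvirt.org Go bindings; the XML file name is a hypothetical stand-in for the <domain> document printed above.

// definedomain.go: register a persistent domain from XML, then boot it.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding the <domain> XML shown in the log.
	domXML, err := os.ReadFile("addons-527125.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// DomainDefineXML registers the domain without booting it; Create then
	// starts it, matching the two phases in the log.
	dom, err := conn.DomainDefineXML(string(domXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}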
	I1008 14:09:42.128693  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:4d:36:cd in network default
	I1008 14:09:42.129529  362613 main.go:141] libmachine: (addons-527125) starting domain...
	I1008 14:09:42.129555  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:42.129563  362613 main.go:141] libmachine: (addons-527125) ensuring networks are active...
	I1008 14:09:42.130380  362613 main.go:141] libmachine: (addons-527125) Ensuring network default is active
	I1008 14:09:42.130773  362613 main.go:141] libmachine: (addons-527125) Ensuring network mk-addons-527125 is active
	I1008 14:09:42.131568  362613 main.go:141] libmachine: (addons-527125) getting domain XML...
	I1008 14:09:42.132768  362613 main.go:141] libmachine: (addons-527125) DBG | starting domain XML:
	I1008 14:09:42.132792  362613 main.go:141] libmachine: (addons-527125) DBG | <domain type='kvm'>
	I1008 14:09:42.132802  362613 main.go:141] libmachine: (addons-527125) DBG |   <name>addons-527125</name>
	I1008 14:09:42.132811  362613 main.go:141] libmachine: (addons-527125) DBG |   <uuid>32f92503-9788-4783-924a-0b79d559c3f2</uuid>
	I1008 14:09:42.132819  362613 main.go:141] libmachine: (addons-527125) DBG |   <memory unit='KiB'>4194304</memory>
	I1008 14:09:42.132824  362613 main.go:141] libmachine: (addons-527125) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1008 14:09:42.132830  362613 main.go:141] libmachine: (addons-527125) DBG |   <vcpu placement='static'>2</vcpu>
	I1008 14:09:42.132840  362613 main.go:141] libmachine: (addons-527125) DBG |   <os>
	I1008 14:09:42.132848  362613 main.go:141] libmachine: (addons-527125) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1008 14:09:42.132855  362613 main.go:141] libmachine: (addons-527125) DBG |     <boot dev='cdrom'/>
	I1008 14:09:42.132860  362613 main.go:141] libmachine: (addons-527125) DBG |     <boot dev='hd'/>
	I1008 14:09:42.132869  362613 main.go:141] libmachine: (addons-527125) DBG |     <bootmenu enable='no'/>
	I1008 14:09:42.132905  362613 main.go:141] libmachine: (addons-527125) DBG |   </os>
	I1008 14:09:42.132928  362613 main.go:141] libmachine: (addons-527125) DBG |   <features>
	I1008 14:09:42.132937  362613 main.go:141] libmachine: (addons-527125) DBG |     <acpi/>
	I1008 14:09:42.132953  362613 main.go:141] libmachine: (addons-527125) DBG |     <apic/>
	I1008 14:09:42.132981  362613 main.go:141] libmachine: (addons-527125) DBG |     <pae/>
	I1008 14:09:42.132991  362613 main.go:141] libmachine: (addons-527125) DBG |   </features>
	I1008 14:09:42.133006  362613 main.go:141] libmachine: (addons-527125) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1008 14:09:42.133016  362613 main.go:141] libmachine: (addons-527125) DBG |   <clock offset='utc'/>
	I1008 14:09:42.133029  362613 main.go:141] libmachine: (addons-527125) DBG |   <on_poweroff>destroy</on_poweroff>
	I1008 14:09:42.133043  362613 main.go:141] libmachine: (addons-527125) DBG |   <on_reboot>restart</on_reboot>
	I1008 14:09:42.133055  362613 main.go:141] libmachine: (addons-527125) DBG |   <on_crash>destroy</on_crash>
	I1008 14:09:42.133059  362613 main.go:141] libmachine: (addons-527125) DBG |   <devices>
	I1008 14:09:42.133068  362613 main.go:141] libmachine: (addons-527125) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1008 14:09:42.133073  362613 main.go:141] libmachine: (addons-527125) DBG |     <disk type='file' device='cdrom'>
	I1008 14:09:42.133081  362613 main.go:141] libmachine: (addons-527125) DBG |       <driver name='qemu' type='raw'/>
	I1008 14:09:42.133088  362613 main.go:141] libmachine: (addons-527125) DBG |       <source file='/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/boot2docker.iso'/>
	I1008 14:09:42.133099  362613 main.go:141] libmachine: (addons-527125) DBG |       <target dev='hdc' bus='scsi'/>
	I1008 14:09:42.133109  362613 main.go:141] libmachine: (addons-527125) DBG |       <readonly/>
	I1008 14:09:42.133132  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1008 14:09:42.133143  362613 main.go:141] libmachine: (addons-527125) DBG |     </disk>
	I1008 14:09:42.133149  362613 main.go:141] libmachine: (addons-527125) DBG |     <disk type='file' device='disk'>
	I1008 14:09:42.133168  362613 main.go:141] libmachine: (addons-527125) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1008 14:09:42.133179  362613 main.go:141] libmachine: (addons-527125) DBG |       <source file='/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/addons-527125.rawdisk'/>
	I1008 14:09:42.133183  362613 main.go:141] libmachine: (addons-527125) DBG |       <target dev='hda' bus='virtio'/>
	I1008 14:09:42.133190  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1008 14:09:42.133196  362613 main.go:141] libmachine: (addons-527125) DBG |     </disk>
	I1008 14:09:42.133201  362613 main.go:141] libmachine: (addons-527125) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1008 14:09:42.133207  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1008 14:09:42.133249  362613 main.go:141] libmachine: (addons-527125) DBG |     </controller>
	I1008 14:09:42.133276  362613 main.go:141] libmachine: (addons-527125) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1008 14:09:42.133287  362613 main.go:141] libmachine: (addons-527125) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1008 14:09:42.133311  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1008 14:09:42.133323  362613 main.go:141] libmachine: (addons-527125) DBG |     </controller>
	I1008 14:09:42.133334  362613 main.go:141] libmachine: (addons-527125) DBG |     <interface type='network'>
	I1008 14:09:42.133343  362613 main.go:141] libmachine: (addons-527125) DBG |       <mac address='52:54:00:74:e6:8d'/>
	I1008 14:09:42.133366  362613 main.go:141] libmachine: (addons-527125) DBG |       <source network='mk-addons-527125'/>
	I1008 14:09:42.133376  362613 main.go:141] libmachine: (addons-527125) DBG |       <model type='virtio'/>
	I1008 14:09:42.133384  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1008 14:09:42.133392  362613 main.go:141] libmachine: (addons-527125) DBG |     </interface>
	I1008 14:09:42.133399  362613 main.go:141] libmachine: (addons-527125) DBG |     <interface type='network'>
	I1008 14:09:42.133408  362613 main.go:141] libmachine: (addons-527125) DBG |       <mac address='52:54:00:4d:36:cd'/>
	I1008 14:09:42.133414  362613 main.go:141] libmachine: (addons-527125) DBG |       <source network='default'/>
	I1008 14:09:42.133422  362613 main.go:141] libmachine: (addons-527125) DBG |       <model type='virtio'/>
	I1008 14:09:42.133433  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1008 14:09:42.133441  362613 main.go:141] libmachine: (addons-527125) DBG |     </interface>
	I1008 14:09:42.133457  362613 main.go:141] libmachine: (addons-527125) DBG |     <serial type='pty'>
	I1008 14:09:42.133470  362613 main.go:141] libmachine: (addons-527125) DBG |       <target type='isa-serial' port='0'>
	I1008 14:09:42.133476  362613 main.go:141] libmachine: (addons-527125) DBG |         <model name='isa-serial'/>
	I1008 14:09:42.133483  362613 main.go:141] libmachine: (addons-527125) DBG |       </target>
	I1008 14:09:42.133489  362613 main.go:141] libmachine: (addons-527125) DBG |     </serial>
	I1008 14:09:42.133499  362613 main.go:141] libmachine: (addons-527125) DBG |     <console type='pty'>
	I1008 14:09:42.133508  362613 main.go:141] libmachine: (addons-527125) DBG |       <target type='serial' port='0'/>
	I1008 14:09:42.133518  362613 main.go:141] libmachine: (addons-527125) DBG |     </console>
	I1008 14:09:42.133527  362613 main.go:141] libmachine: (addons-527125) DBG |     <input type='mouse' bus='ps2'/>
	I1008 14:09:42.133542  362613 main.go:141] libmachine: (addons-527125) DBG |     <input type='keyboard' bus='ps2'/>
	I1008 14:09:42.133550  362613 main.go:141] libmachine: (addons-527125) DBG |     <audio id='1' type='none'/>
	I1008 14:09:42.133556  362613 main.go:141] libmachine: (addons-527125) DBG |     <memballoon model='virtio'>
	I1008 14:09:42.133564  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1008 14:09:42.133569  362613 main.go:141] libmachine: (addons-527125) DBG |     </memballoon>
	I1008 14:09:42.133575  362613 main.go:141] libmachine: (addons-527125) DBG |     <rng model='virtio'>
	I1008 14:09:42.133597  362613 main.go:141] libmachine: (addons-527125) DBG |       <backend model='random'>/dev/random</backend>
	I1008 14:09:42.133605  362613 main.go:141] libmachine: (addons-527125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1008 14:09:42.133610  362613 main.go:141] libmachine: (addons-527125) DBG |     </rng>
	I1008 14:09:42.133619  362613 main.go:141] libmachine: (addons-527125) DBG |   </devices>
	I1008 14:09:42.133626  362613 main.go:141] libmachine: (addons-527125) DBG | </domain>
	I1008 14:09:42.133630  362613 main.go:141] libmachine: (addons-527125) DBG | 
	I1008 14:09:43.538014  362613 main.go:141] libmachine: (addons-527125) waiting for domain to start...
	I1008 14:09:43.539280  362613 main.go:141] libmachine: (addons-527125) domain is now running
	I1008 14:09:43.539308  362613 main.go:141] libmachine: (addons-527125) waiting for IP...
	I1008 14:09:43.540155  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:43.540646  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:43.540671  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:43.540919  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:43.540996  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:43.540939  362641 retry.go:31] will retry after 266.435509ms: waiting for domain to come up
	I1008 14:09:43.809865  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:43.810488  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:43.810529  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:43.810908  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:43.810972  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:43.810867  362641 retry.go:31] will retry after 379.924008ms: waiting for domain to come up
	I1008 14:09:44.192753  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:44.193257  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:44.193288  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:44.193686  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:44.193717  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:44.193633  362641 retry.go:31] will retry after 396.760795ms: waiting for domain to come up
	I1008 14:09:44.592389  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:44.592960  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:44.592985  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:44.593233  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:44.593299  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:44.593221  362641 retry.go:31] will retry after 573.559851ms: waiting for domain to come up
	I1008 14:09:45.169049  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:45.169621  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:45.169653  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:45.169898  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:45.169922  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:45.169876  362641 retry.go:31] will retry after 488.432481ms: waiting for domain to come up
	I1008 14:09:45.659981  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:45.660707  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:45.660747  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:45.661016  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:45.661044  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:45.660968  362641 retry.go:31] will retry after 890.102707ms: waiting for domain to come up
	I1008 14:09:46.553002  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:46.553633  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:46.553684  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:46.553982  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:46.554031  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:46.553966  362641 retry.go:31] will retry after 819.867625ms: waiting for domain to come up
	I1008 14:09:47.375340  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:47.375849  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:47.375873  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:47.376203  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:47.376262  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:47.376188  362641 retry.go:31] will retry after 976.117788ms: waiting for domain to come up
	I1008 14:09:48.353708  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:48.354274  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:48.354310  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:48.354558  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:48.354586  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:48.354540  362641 retry.go:31] will retry after 1.500376225s: waiting for domain to come up
	I1008 14:09:49.856990  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:49.857659  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:49.857682  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:49.858024  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:49.858097  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:49.858023  362641 retry.go:31] will retry after 1.512239805s: waiting for domain to come up
	I1008 14:09:51.372402  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:51.372969  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:51.373001  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:51.373224  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:51.373312  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:51.373212  362641 retry.go:31] will retry after 1.849421069s: waiting for domain to come up
	I1008 14:09:53.225054  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:53.225746  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:53.225777  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:53.226089  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:53.226134  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:53.226086  362641 retry.go:31] will retry after 2.182925135s: waiting for domain to come up
	I1008 14:09:55.411660  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:55.412240  362613 main.go:141] libmachine: (addons-527125) DBG | no network interface addresses found for domain addons-527125 (source=lease)
	I1008 14:09:55.412266  362613 main.go:141] libmachine: (addons-527125) DBG | trying to list again with source=arp
	I1008 14:09:55.412642  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find current IP address of domain addons-527125 in network mk-addons-527125 (interfaces detected: [])
	I1008 14:09:55.412727  362613 main.go:141] libmachine: (addons-527125) DBG | I1008 14:09:55.412651  362641 retry.go:31] will retry after 3.209499547s: waiting for domain to come up
	I1008 14:09:58.625944  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:58.626570  362613 main.go:141] libmachine: (addons-527125) found domain IP: 192.168.39.51
	I1008 14:09:58.626598  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has current primary IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:58.626603  362613 main.go:141] libmachine: (addons-527125) reserving static IP address...
	I1008 14:09:58.627000  362613 main.go:141] libmachine: (addons-527125) DBG | unable to find host DHCP lease matching {name: "addons-527125", mac: "52:54:00:74:e6:8d", ip: "192.168.39.51"} in network mk-addons-527125
	I1008 14:09:58.823120  362613 main.go:141] libmachine: (addons-527125) reserved static IP address 192.168.39.51 for domain addons-527125
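
The "will retry after ..." lines above are a bounded poll: list the domain's interface addresses (first from the DHCP lease table, then via ARP), and if nothing has shown up yet, sleep a jittered, growing interval and look again. Below is a minimal Go sketch of that shape, not minikube's actual code; lookupIP stands in for the lease/ARP query.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP until it reports an address or the timeout
	// expires, sleeping a jittered, growing interval between attempts.
	func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookupIP(); ok {
				return ip, nil
			}
			time.Sleep(wait + time.Duration(rand.Int63n(int64(wait)))) // jitter the poll
			if wait < 3*time.Second {
				wait = wait * 3 / 2 // grow toward the ~3s intervals seen above
			}
		}
		return "", fmt.Errorf("timed out waiting for domain IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, bool) {
			attempts++
			return "192.168.39.51", attempts > 3 // pretend DHCP takes a few polls
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
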
	I1008 14:09:58.823146  362613 main.go:141] libmachine: (addons-527125) waiting for SSH...
	I1008 14:09:58.823155  362613 main.go:141] libmachine: (addons-527125) DBG | Getting to WaitForSSH function...
	I1008 14:09:58.826096  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:58.826605  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:58.826644  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:58.826910  362613 main.go:141] libmachine: (addons-527125) DBG | Using SSH client type: external
	I1008 14:09:58.826939  362613 main.go:141] libmachine: (addons-527125) DBG | Using SSH private key: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa (-rw-------)
	I1008 14:09:58.826978  362613 main.go:141] libmachine: (addons-527125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 14:09:58.826999  362613 main.go:141] libmachine: (addons-527125) DBG | About to run SSH command:
	I1008 14:09:58.827011  362613 main.go:141] libmachine: (addons-527125) DBG | exit 0
	I1008 14:09:58.963491  362613 main.go:141] libmachine: (addons-527125) DBG | SSH cmd err, output: <nil>: 
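
The reachability test above is nothing more than running `exit 0` over SSH with host-key checking disabled; the guest counts as up the first time that returns status 0. Here is a sketch of the same probe with os/exec, with the flags taken from the command line logged above; sshReady and the key path are hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady reports whether `exit 0` succeeds over SSH, i.e. whether
	// sshd in the guest is up and accepting our key yet.
	func sshReady(keyPath, addr string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			addr, "exit 0")
		return cmd.Run() == nil // exit status 0 means the guest answered
	}

	func main() {
		fmt.Println(sshReady("/path/to/id_rsa", "docker@192.168.39.51"))
	}
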
	I1008 14:09:58.963786  362613 main.go:141] libmachine: (addons-527125) domain creation complete
	I1008 14:09:58.964295  362613 main.go:141] libmachine: (addons-527125) Calling .GetConfigRaw
	I1008 14:09:58.964982  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:09:58.965224  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:09:58.965438  362613 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1008 14:09:58.965455  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:09:58.967060  362613 main.go:141] libmachine: Detecting operating system of created instance...
	I1008 14:09:58.967079  362613 main.go:141] libmachine: Waiting for SSH to be available...
	I1008 14:09:58.967087  362613 main.go:141] libmachine: Getting to WaitForSSH function...
	I1008 14:09:58.967094  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:58.969835  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:58.970364  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:58.970397  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:58.970613  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:09:58.970836  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:58.971002  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:58.971147  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:09:58.971313  362613 main.go:141] libmachine: Using SSH client type: native
	I1008 14:09:58.971603  362613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1008 14:09:58.971619  362613 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1008 14:09:59.088710  362613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:09:59.088741  362613 main.go:141] libmachine: Detecting the provisioner...
	I1008 14:09:59.088753  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:59.091977  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.092301  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.092336  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.092518  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:09:59.092779  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.092981  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.093154  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:09:59.093284  362613 main.go:141] libmachine: Using SSH client type: native
	I1008 14:09:59.093541  362613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1008 14:09:59.093554  362613 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1008 14:09:59.210786  362613 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1008 14:09:59.210903  362613 main.go:141] libmachine: found compatible host: buildroot
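
Provisioner detection reduces to parsing the /etc/os-release dump above: read the ID= field and match it against the provisioners the tool knows ("buildroot" here). A small sketch of just the parsing step, assuming the match is a plain string compare:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// osID pulls the ID= field out of /etc/os-release content.
	func osID(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			if v, ok := strings.CutPrefix(strings.TrimSpace(sc.Text()), "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\n"
		fmt.Println(osID(out)) // "buildroot" selects the buildroot provisioner
	}
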
	I1008 14:09:59.210918  362613 main.go:141] libmachine: Provisioning with buildroot...
	I1008 14:09:59.210926  362613 main.go:141] libmachine: (addons-527125) Calling .GetMachineName
	I1008 14:09:59.211334  362613 buildroot.go:166] provisioning hostname "addons-527125"
	I1008 14:09:59.211392  362613 main.go:141] libmachine: (addons-527125) Calling .GetMachineName
	I1008 14:09:59.211654  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:59.215661  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.216126  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.216165  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.216389  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:09:59.216667  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.216886  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.217058  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:09:59.217220  362613 main.go:141] libmachine: Using SSH client type: native
	I1008 14:09:59.217487  362613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1008 14:09:59.217501  362613 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-527125 && echo "addons-527125" | sudo tee /etc/hostname
	I1008 14:09:59.357184  362613 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527125
	
	I1008 14:09:59.357214  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:59.361376  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.361902  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.361945  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.362130  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:09:59.362341  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.362560  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.362729  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:09:59.362894  362613 main.go:141] libmachine: Using SSH client type: native
	I1008 14:09:59.363235  362613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1008 14:09:59.363266  362613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-527125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-527125/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-527125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 14:09:59.494789  362613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 14:09:59.494841  362613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21681-357044/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-357044/.minikube}
	I1008 14:09:59.494874  362613 buildroot.go:174] setting up certificates
	I1008 14:09:59.494893  362613 provision.go:84] configureAuth start
	I1008 14:09:59.494912  362613 main.go:141] libmachine: (addons-527125) Calling .GetMachineName
	I1008 14:09:59.495258  362613 main.go:141] libmachine: (addons-527125) Calling .GetIP
	I1008 14:09:59.498493  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.498993  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.499017  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.499315  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:59.502233  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.502861  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.502890  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.503203  362613 provision.go:143] copyHostCerts
	I1008 14:09:59.503312  362613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem (1082 bytes)
	I1008 14:09:59.503531  362613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem (1123 bytes)
	I1008 14:09:59.503703  362613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem (1675 bytes)
	I1008 14:09:59.503845  362613 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem org=jenkins.addons-527125 san=[127.0.0.1 192.168.39.51 addons-527125 localhost minikube]
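
The server cert generated here is an ordinary x509 leaf whose subject-alternative-name list is exactly the san=[...] set above, so the endpoint validates whether it is reached as 127.0.0.1, the VM IP, the machine name, localhost, or minikube. A compressed crypto/x509 sketch of how those SANs are expressed; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-527125"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration from the config
			// SANs from the log line: IPs and DNS names live in separate fields.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.51")},
			DNSNames:    []string{"addons-527125", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err) // DER bytes, ready to PEM-encode as server.pem
	}
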
	I1008 14:09:59.615159  362613 provision.go:177] copyRemoteCerts
	I1008 14:09:59.615224  362613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 14:09:59.615250  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:59.618537  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.618990  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.619027  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.619285  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:09:59.619524  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.619731  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:09:59.619967  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:09:59.710919  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 14:09:59.743004  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 14:09:59.775162  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 14:09:59.809334  362613 provision.go:87] duration metric: took 314.420286ms to configureAuth
	I1008 14:09:59.809382  362613 buildroot.go:189] setting minikube options for container-runtime
	I1008 14:09:59.809611  362613 config.go:182] Loaded profile config "addons-527125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:09:59.809700  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:09:59.813407  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.813889  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:09:59.813930  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:09:59.814135  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:09:59.814432  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.814681  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:09:59.814870  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:09:59.815110  362613 main.go:141] libmachine: Using SSH client type: native
	I1008 14:09:59.815372  362613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1008 14:09:59.815392  362613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 14:10:00.342134  362613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 14:10:00.342181  362613 main.go:141] libmachine: Checking connection to Docker...
	I1008 14:10:00.342192  362613 main.go:141] libmachine: (addons-527125) Calling .GetURL
	I1008 14:10:00.343881  362613 main.go:141] libmachine: (addons-527125) DBG | using libvirt version 8000000
	I1008 14:10:00.346923  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.347277  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.347311  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.347462  362613 main.go:141] libmachine: Docker is up and running!
	I1008 14:10:00.347479  362613 main.go:141] libmachine: Reticulating splines...
	I1008 14:10:00.347488  362613 client.go:171] duration metric: took 19.428159897s to LocalClient.Create
	I1008 14:10:00.347521  362613 start.go:167] duration metric: took 19.428241766s to libmachine.API.Create "addons-527125"
	I1008 14:10:00.347534  362613 start.go:293] postStartSetup for "addons-527125" (driver="kvm2")
	I1008 14:10:00.347548  362613 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 14:10:00.347574  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:00.347864  362613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 14:10:00.347896  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:00.350441  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.351011  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.351041  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.351254  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:00.351462  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:00.351706  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:00.351926  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:00.441541  362613 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 14:10:00.446672  362613 info.go:137] Remote host: Buildroot 2025.02
	I1008 14:10:00.446708  362613 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-357044/.minikube/addons for local assets ...
	I1008 14:10:00.446788  362613 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-357044/.minikube/files for local assets ...
	I1008 14:10:00.446826  362613 start.go:296] duration metric: took 99.283925ms for postStartSetup
	I1008 14:10:00.446871  362613 main.go:141] libmachine: (addons-527125) Calling .GetConfigRaw
	I1008 14:10:00.447587  362613 main.go:141] libmachine: (addons-527125) Calling .GetIP
	I1008 14:10:00.450894  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.451423  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.451446  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.451857  362613 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/config.json ...
	I1008 14:10:00.452086  362613 start.go:128] duration metric: took 19.551085093s to createHost
	I1008 14:10:00.452114  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:00.455035  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.455422  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.455456  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.455610  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:00.455829  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:00.455978  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:00.456141  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:00.456367  362613 main.go:141] libmachine: Using SSH client type: native
	I1008 14:10:00.456636  362613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1008 14:10:00.456652  362613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 14:10:00.574130  362613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759932600.545265039
	
	I1008 14:10:00.574167  362613 fix.go:216] guest clock: 1759932600.545265039
	I1008 14:10:00.574175  362613 fix.go:229] Guest: 2025-10-08 14:10:00.545265039 +0000 UTC Remote: 2025-10-08 14:10:00.452100478 +0000 UTC m=+19.675403323 (delta=93.164561ms)
	I1008 14:10:00.574198  362613 fix.go:200] guest clock delta is within tolerance: 93.164561ms
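
The clock check runs `date +%s.%N` in the guest, parses the output as fractional seconds, and diffs it against the host timestamp captured when the command was issued; only a delta beyond the tolerance would trigger a clock fix. A sketch of the comparison (guestDelta is a hypothetical helper; the float parse loses a little nanosecond precision, which is harmless at millisecond scale):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// guestDelta parses `date +%s.%N` output and returns guest minus host.
	func guestDelta(dateOutput string, hostAt time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(dateOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostAt), nil
	}

	func main() {
		host := time.Unix(0, 1759932600452100478) // the Remote timestamp from the log
		d, _ := guestDelta("1759932600.545265039", host)
		fmt.Println(d, math.Abs(d.Seconds()) < 1) // ~93ms, well within tolerance
	}
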
	I1008 14:10:00.574204  362613 start.go:83] releasing machines lock for "addons-527125", held for 19.673311899s
	I1008 14:10:00.574230  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:00.574614  362613 main.go:141] libmachine: (addons-527125) Calling .GetIP
	I1008 14:10:00.577950  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.578441  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.578474  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.578685  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:00.579234  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:00.579476  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:00.579610  362613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 14:10:00.579656  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:00.579752  362613 ssh_runner.go:195] Run: cat /version.json
	I1008 14:10:00.579781  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:00.583066  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.583172  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.583519  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.583546  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.583578  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:00.583593  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:00.583769  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:00.583787  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:00.583974  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:00.584157  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:00.584161  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:00.584307  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:00.584380  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:00.584536  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:00.700433  362613 ssh_runner.go:195] Run: systemctl --version
	I1008 14:10:00.707248  362613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 14:10:00.869679  362613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 14:10:00.876727  362613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 14:10:00.876800  362613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 14:10:00.897029  362613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 14:10:00.897059  362613 start.go:495] detecting cgroup driver to use...
	I1008 14:10:00.897126  362613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 14:10:00.915515  362613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 14:10:00.933060  362613 docker.go:218] disabling cri-docker service (if available) ...
	I1008 14:10:00.933127  362613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 14:10:00.950757  362613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 14:10:00.972452  362613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 14:10:01.124508  362613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 14:10:01.335168  362613 docker.go:234] disabling docker service ...
	I1008 14:10:01.335305  362613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 14:10:01.352254  362613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 14:10:01.368588  362613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 14:10:01.527478  362613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 14:10:01.675487  362613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 14:10:01.692871  362613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 14:10:01.716917  362613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 14:10:01.716994  362613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:10:01.730180  362613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 14:10:01.730255  362613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:10:01.743186  362613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:10:01.756125  362613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:10:01.768999  362613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 14:10:01.782775  362613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:10:01.796095  362613 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 14:10:01.819750  362613 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
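
Read together, the sed edits above converge on a drop-in roughly like the following. This is reconstructed from the commands, not read back from the VM, and the section headers are assumed:

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative reconstruction)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
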
	I1008 14:10:01.833272  362613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 14:10:01.845438  362613 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 14:10:01.845503  362613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 14:10:01.867745  362613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
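
The pattern here is probe-then-load: reading the bridge-netfilter sysctl fails while br_netfilter is not loaded (there is no /proc/sys/net/bridge yet), so the fallback is simply modprobe and move on, then force ip_forward on through /proc. A sketch of that fallback; run is a thin wrapper standing in for the ssh_runner calls above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	// ensureNetfilter mirrors the probe-then-modprobe fallback in the log.
	func ensureNetfilter() error {
		if run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables") != nil {
			// a missing sysctl file usually just means the module isn't loaded
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				return fmt.Errorf("br_netfilter unavailable: %w", err)
			}
		}
		// forwarding is required for pod traffic either way
		return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}

	func main() { fmt.Println(ensureNetfilter()) }
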
	I1008 14:10:01.880860  362613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:10:02.025192  362613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 14:10:02.138044  362613 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 14:10:02.138171  362613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 14:10:02.143865  362613 start.go:563] Will wait 60s for crictl version
	I1008 14:10:02.143948  362613 ssh_runner.go:195] Run: which crictl
	I1008 14:10:02.148461  362613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 14:10:02.191958  362613 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 14:10:02.192072  362613 ssh_runner.go:195] Run: crio --version
	I1008 14:10:02.221794  362613 ssh_runner.go:195] Run: crio --version
	I1008 14:10:02.255784  362613 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1008 14:10:02.257118  362613 main.go:141] libmachine: (addons-527125) Calling .GetIP
	I1008 14:10:02.260655  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:02.261152  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:02.261215  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:02.261544  362613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 14:10:02.266285  362613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 14:10:02.281796  362613 kubeadm.go:883] updating cluster {Name:addons-527125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-527125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 14:10:02.281921  362613 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:10:02.281970  362613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:10:02.318143  362613 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1008 14:10:02.318252  362613 ssh_runner.go:195] Run: which lz4
	I1008 14:10:02.323045  362613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 14:10:02.328123  362613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 14:10:02.328176  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1008 14:10:03.876097  362613 crio.go:462] duration metric: took 1.553085209s to copy over tarball
	I1008 14:10:03.876183  362613 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 14:10:05.555878  362613 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.67965158s)
	I1008 14:10:05.555926  362613 crio.go:469] duration metric: took 1.67979262s to extract the tarball
	I1008 14:10:05.555937  362613 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 14:10:05.597552  362613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 14:10:05.646511  362613 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 14:10:05.646538  362613 cache_images.go:85] Images are preloaded, skipping loading
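
The preload round trip above is: ask crictl for the image list, and if a sentinel kube image is missing, copy the ~400 MB tarball in and unpack it over /var with xattrs preserved (so file capabilities on the binaries survive), then re-check. A sketch of the two halves, with the sentinel and paths taken from the log; these are not minikube's actual functions:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded checks `crictl images` output for a sentinel image.
	func imagesPreloaded(sentinel string) bool {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		return err == nil && strings.Contains(string(out), sentinel)
	}

	// extractPreload unpacks the lz4 tarball over /var, keeping the
	// security.capability xattrs the images were packed with.
	func extractPreload(tarball string) error {
		return exec.Command("sudo", "tar", "--xattrs",
			"--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
	}

	func main() {
		if !imagesPreloaded("registry.k8s.io/kube-apiserver:v1.34.1") {
			fmt.Println(extractPreload("/preloaded.tar.lz4"))
		}
	}
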
	I1008 14:10:05.646546  362613 kubeadm.go:934] updating node { 192.168.39.51 8443 v1.34.1 crio true true} ...
	I1008 14:10:05.646660  362613 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-527125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-527125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 14:10:05.646755  362613 ssh_runner.go:195] Run: crio config
	I1008 14:10:05.693782  362613 cni.go:84] Creating CNI manager for ""
	I1008 14:10:05.693811  362613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 14:10:05.693833  362613 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 14:10:05.693861  362613 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-527125 NodeName:addons-527125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 14:10:05.694002  362613 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-527125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.51"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 14:10:05.694077  362613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 14:10:05.709502  362613 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 14:10:05.709590  362613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 14:10:05.723718  362613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1008 14:10:05.744991  362613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 14:10:05.766841  362613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
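The kubeadm config dumped above has just been written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch for sanity-checking such a file by hand, assuming kubeadm v1.26 or newer (which ships the validate subcommand) is on the PATH; this step is not part of minikube's own flow:

	# Validate the generated kubeadm config without applying anything.
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new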
	I1008 14:10:05.787788  362613 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I1008 14:10:05.792477  362613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
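The one-liner above refreshes the control-plane.minikube.internal entry in /etc/hosts atomically: it filters out any stale line, appends the current mapping, writes everything to a temp file, and copies the whole file back in one step. A standalone sketch of the same idiom, where HOST and IP are placeholders rather than values from this run:

	# Replace (or add) a single tab-separated /etc/hosts entry without editing in place.
	HOST=example.internal; IP=192.0.2.10
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts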
	I1008 14:10:05.808350  362613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:10:05.953068  362613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:10:05.975048  362613 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125 for IP: 192.168.39.51
	I1008 14:10:05.975088  362613 certs.go:195] generating shared ca certs ...
	I1008 14:10:05.975115  362613 certs.go:227] acquiring lock for ca certs: {Name:mk0e7909a623394743b0dc10595ebb34d09a814f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:05.975314  362613 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key
	I1008 14:10:06.207871  362613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt ...
	I1008 14:10:06.207904  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt: {Name:mka682cac282fb7129229ed7ebf7e743baa0754c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.208126  362613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key ...
	I1008 14:10:06.208145  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key: {Name:mka7f6547f69148b291c454cc49da3781a82c12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.208253  362613 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key
	I1008 14:10:06.431575  362613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.crt ...
	I1008 14:10:06.431608  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.crt: {Name:mk278f6b36f073aeeb3e37fa3572492542e60e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.431826  362613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key ...
	I1008 14:10:06.431849  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key: {Name:mka13c8d581cd87648ea62aa8d70994f1d266af7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.432099  362613 certs.go:257] generating profile certs ...
	I1008 14:10:06.432197  362613 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.key
	I1008 14:10:06.432229  362613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt with IP's: []
	I1008 14:10:06.572183  362613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt ...
	I1008 14:10:06.572217  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: {Name:mkd4b2688b3aafa86328cf023f516611a0bae1ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.572457  362613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.key ...
	I1008 14:10:06.572483  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.key: {Name:mkb9fa599d85739416f51a735a80126e1553be83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.572621  362613 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.key.2f67ccea
	I1008 14:10:06.572644  362613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.crt.2f67ccea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.51]
	I1008 14:10:06.605104  362613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.crt.2f67ccea ...
	I1008 14:10:06.605139  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.crt.2f67ccea: {Name:mk9848036b939168909bd5a27d16b3b7654281ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.605345  362613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.key.2f67ccea ...
	I1008 14:10:06.605383  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.key.2f67ccea: {Name:mka8ec8af8285ce286412876af5cee6652773f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.605508  362613 certs.go:382] copying /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.crt.2f67ccea -> /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.crt
	I1008 14:10:06.605621  362613 certs.go:386] copying /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.key.2f67ccea -> /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.key
	I1008 14:10:06.605708  362613 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.key
	I1008 14:10:06.605737  362613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.crt with IP's: []
	I1008 14:10:06.971962  362613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.crt ...
	I1008 14:10:06.971993  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.crt: {Name:mk8697c4fffbb980bad02fa9355a038ca3a0717b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.972191  362613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.key ...
	I1008 14:10:06.972219  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.key: {Name:mk344cbad8c025eb827640088f0b63a688f80b71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:06.972467  362613 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 14:10:06.972513  362613 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem (1082 bytes)
	I1008 14:10:06.972549  362613 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem (1123 bytes)
	I1008 14:10:06.972579  362613 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem (1675 bytes)
	I1008 14:10:06.973214  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 14:10:07.006060  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 14:10:07.038236  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 14:10:07.069945  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 14:10:07.101223  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 14:10:07.132951  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 14:10:07.163654  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 14:10:07.195393  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 14:10:07.225706  362613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 14:10:07.257264  362613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 14:10:07.278162  362613 ssh_runner.go:195] Run: openssl version
	I1008 14:10:07.284663  362613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 14:10:07.297773  362613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:10:07.302989  362613 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:10 /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:10:07.303052  362613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 14:10:07.310681  362613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
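The two commands above implement OpenSSL's hashed-directory CA lookup: "openssl x509 -hash -noout" prints the subject-name hash (b5213941 for minikubeCA here), and linking the cert as <hash>.0 under /etc/ssl/certs lets OpenSSL resolve the issuer by hash at verification time. The same convention scripted as a sketch; the variable names are illustrative:

	# Link a CA cert under its OpenSSL subject hash so verification can find it.
	CA=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CA")
	sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"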
	I1008 14:10:07.324238  362613 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 14:10:07.329188  362613 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 14:10:07.329250  362613 kubeadm.go:400] StartCluster: {Name:addons-527125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-527125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:10:07.329327  362613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 14:10:07.329414  362613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 14:10:07.369853  362613 cri.go:89] found id: ""
	I1008 14:10:07.369938  362613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 14:10:07.382885  362613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 14:10:07.395780  362613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 14:10:07.408137  362613 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 14:10:07.408168  362613 kubeadm.go:157] found existing configuration files:
	
	I1008 14:10:07.408219  362613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 14:10:07.420180  362613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 14:10:07.420270  362613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 14:10:07.432825  362613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 14:10:07.444014  362613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 14:10:07.444075  362613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 14:10:07.456226  362613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 14:10:07.467290  362613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 14:10:07.467382  362613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 14:10:07.479647  362613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 14:10:07.491031  362613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 14:10:07.491100  362613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 14:10:07.503528  362613 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 14:10:07.556241  362613 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 14:10:07.556314  362613 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 14:10:07.668709  362613 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 14:10:07.668913  362613 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 14:10:07.669041  362613 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 14:10:07.680150  362613 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 14:10:07.802963  362613 out.go:252]   - Generating certificates and keys ...
	I1008 14:10:07.803136  362613 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 14:10:07.803234  362613 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 14:10:08.076574  362613 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 14:10:08.174844  362613 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 14:10:08.353216  362613 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 14:10:08.637761  362613 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 14:10:08.926966  362613 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 14:10:08.927158  362613 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-527125 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I1008 14:10:09.440977  362613 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 14:10:09.441142  362613 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-527125 localhost] and IPs [192.168.39.51 127.0.0.1 ::1]
	I1008 14:10:09.538497  362613 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 14:10:09.870079  362613 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 14:10:09.948866  362613 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 14:10:09.948948  362613 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 14:10:10.057903  362613 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 14:10:10.271197  362613 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 14:10:10.679253  362613 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 14:10:11.060780  362613 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 14:10:11.452246  362613 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 14:10:11.452840  362613 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 14:10:11.457119  362613 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 14:10:11.459003  362613 out.go:252]   - Booting up control plane ...
	I1008 14:10:11.459141  362613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 14:10:11.459230  362613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 14:10:11.459287  362613 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 14:10:11.476634  362613 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 14:10:11.476788  362613 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 14:10:11.483497  362613 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 14:10:11.483773  362613 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 14:10:11.483888  362613 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 14:10:11.660096  362613 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 14:10:11.660203  362613 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 14:10:13.161092  362613 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502181902s
	I1008 14:10:13.163688  362613 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 14:10:13.163821  362613 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.51:8443/livez
	I1008 14:10:13.163970  362613 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 14:10:13.164095  362613 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 14:10:15.847928  362613 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.685370823s
	I1008 14:10:17.161668  362613 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.000265094s
	I1008 14:10:19.162330  362613 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001641521s
	I1008 14:10:19.180242  362613 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 14:10:19.200102  362613 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 14:10:19.215990  362613 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 14:10:19.216156  362613 kubeadm.go:318] [mark-control-plane] Marking the node addons-527125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 14:10:19.233500  362613 kubeadm.go:318] [bootstrap-token] Using token: pyrx47.jolahbrauxb3uhol
	I1008 14:10:19.234938  362613 out.go:252]   - Configuring RBAC rules ...
	I1008 14:10:19.235100  362613 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 14:10:19.239566  362613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 14:10:19.248391  362613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 14:10:19.256677  362613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 14:10:19.263663  362613 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 14:10:19.268467  362613 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 14:10:19.569469  362613 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 14:10:20.040655  362613 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 14:10:20.570403  362613 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 14:10:20.572369  362613 kubeadm.go:318] 
	I1008 14:10:20.572502  362613 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 14:10:20.572522  362613 kubeadm.go:318] 
	I1008 14:10:20.572616  362613 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 14:10:20.572630  362613 kubeadm.go:318] 
	I1008 14:10:20.572699  362613 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 14:10:20.572777  362613 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 14:10:20.572821  362613 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 14:10:20.572827  362613 kubeadm.go:318] 
	I1008 14:10:20.572883  362613 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 14:10:20.572889  362613 kubeadm.go:318] 
	I1008 14:10:20.572926  362613 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 14:10:20.572933  362613 kubeadm.go:318] 
	I1008 14:10:20.572978  362613 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 14:10:20.573085  362613 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 14:10:20.573212  362613 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 14:10:20.573233  362613 kubeadm.go:318] 
	I1008 14:10:20.573322  362613 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 14:10:20.573408  362613 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 14:10:20.573414  362613 kubeadm.go:318] 
	I1008 14:10:20.573479  362613 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pyrx47.jolahbrauxb3uhol \
	I1008 14:10:20.573565  362613 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a7287c55c13e2850ef1491b64e09896fe2ad6beb524d6ee9243d7df8a8bd9a14 \
	I1008 14:10:20.573588  362613 kubeadm.go:318] 	--control-plane 
	I1008 14:10:20.573594  362613 kubeadm.go:318] 
	I1008 14:10:20.573670  362613 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 14:10:20.573677  362613 kubeadm.go:318] 
	I1008 14:10:20.573796  362613 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pyrx47.jolahbrauxb3uhol \
	I1008 14:10:20.573924  362613 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a7287c55c13e2850ef1491b64e09896fe2ad6beb524d6ee9243d7df8a8bd9a14 
	I1008 14:10:20.576214  362613 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
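Both join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. A sketch of the standard recipe for recomputing it on the control plane, assuming an RSA CA key (kubeadm's default) and the certificatesDir from the config earlier in this log:

	# Recompute the discovery hash from the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'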
	I1008 14:10:20.576260  362613 cni.go:84] Creating CNI manager for ""
	I1008 14:10:20.576270  362613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 14:10:20.579115  362613 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 14:10:20.580500  362613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 14:10:20.594031  362613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
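The 496 bytes written to /etc/cni/net.d/1-k8s.conflist above configure the bridge CNI plugin; the actual contents are not shown in this log. A hypothetical minimal conflist for the bridge plugin using the cluster's 10.244.0.0/16 pod subnet (a real deployment would typically use the per-node pod CIDR instead):

	# Write a minimal bridge CNI config; contents are illustrative, not minikube's.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "k8s",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF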
	I1008 14:10:20.618872  362613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 14:10:20.619031  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-527125 minikube.k8s.io/updated_at=2025_10_08T14_10_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=addons-527125 minikube.k8s.io/primary=true
	I1008 14:10:20.619038  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:20.675900  362613 ops.go:34] apiserver oom_adj: -16
	I1008 14:10:20.758109  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:21.258294  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:21.758855  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:22.259034  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:22.758824  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:23.258597  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:23.758299  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:24.258500  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:24.758236  362613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 14:10:24.855836  362613 kubeadm.go:1113] duration metric: took 4.236878424s to wait for elevateKubeSystemPrivileges
	I1008 14:10:24.855891  362613 kubeadm.go:402] duration metric: took 17.526645827s to StartCluster
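The burst of "kubectl get sa default" runs above is a poll loop: minikube retries until the default ServiceAccount exists, which signals that the controller-manager has finished bootstrapping the namespace. An equivalent one-liner sketch (not minikube's code; kubectl and kubeconfig paths as used in this log):

	# Poll for the default ServiceAccount before proceeding.
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	  --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done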
	I1008 14:10:24.855920  362613 settings.go:142] acquiring lock: {Name:mk117bd4e067de4a07a0962f9cb0a7e9e4347a17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:24.856108  362613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 14:10:24.856824  362613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/kubeconfig: {Name:mk16a3f122b6b062cdcb94a3a6f8de0fc11cf727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 14:10:24.857071  362613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 14:10:24.857114  362613 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 14:10:24.857177  362613 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1008 14:10:24.857307  362613 addons.go:69] Setting yakd=true in profile "addons-527125"
	I1008 14:10:24.857324  362613 config.go:182] Loaded profile config "addons-527125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:10:24.857338  362613 addons.go:69] Setting metrics-server=true in profile "addons-527125"
	I1008 14:10:24.857350  362613 addons.go:238] Setting addon metrics-server=true in "addons-527125"
	I1008 14:10:24.857330  362613 addons.go:238] Setting addon yakd=true in "addons-527125"
	I1008 14:10:24.857386  362613 addons.go:69] Setting default-storageclass=true in profile "addons-527125"
	I1008 14:10:24.857405  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.857405  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.857404  362613 addons.go:69] Setting ingress-dns=true in profile "addons-527125"
	I1008 14:10:24.857413  362613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-527125"
	I1008 14:10:24.857408  362613 addons.go:69] Setting ingress=true in profile "addons-527125"
	I1008 14:10:24.857440  362613 addons.go:69] Setting cloud-spanner=true in profile "addons-527125"
	I1008 14:10:24.857445  362613 addons.go:238] Setting addon ingress=true in "addons-527125"
	I1008 14:10:24.857456  362613 addons.go:238] Setting addon cloud-spanner=true in "addons-527125"
	I1008 14:10:24.857487  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.857502  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.857568  362613 addons.go:69] Setting storage-provisioner=true in profile "addons-527125"
	I1008 14:10:24.857593  362613 addons.go:238] Setting addon storage-provisioner=true in "addons-527125"
	I1008 14:10:24.857619  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.857879  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.857898  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.857915  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.857924  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.857928  362613 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-527125"
	I1008 14:10:24.857934  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.857945  362613 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-527125"
	I1008 14:10:24.857966  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.857985  362613 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-527125"
	I1008 14:10:24.858004  362613 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-527125"
	I1008 14:10:24.858017  362613 addons.go:69] Setting registry=true in profile "addons-527125"
	I1008 14:10:24.858020  362613 addons.go:69] Setting volumesnapshots=true in profile "addons-527125"
	I1008 14:10:24.857970  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.858028  362613 addons.go:238] Setting addon registry=true in "addons-527125"
	I1008 14:10:24.857348  362613 addons.go:69] Setting inspektor-gadget=true in profile "addons-527125"
	I1008 14:10:24.858033  362613 addons.go:238] Setting addon volumesnapshots=true in "addons-527125"
	I1008 14:10:24.858043  362613 addons.go:238] Setting addon inspektor-gadget=true in "addons-527125"
	I1008 14:10:24.858052  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858056  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858007  362613 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-527125"
	I1008 14:10:24.858072  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858082  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858219  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.858364  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.858399  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.858404  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.858478  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.858565  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.857390  362613 addons.go:69] Setting gcp-auth=true in profile "addons-527125"
	I1008 14:10:24.858662  362613 mustload.go:65] Loading cluster: addons-527125
	I1008 14:10:24.858884  362613 config.go:182] Loaded profile config "addons-527125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:10:24.858347  362613 addons.go:69] Setting registry-creds=true in profile "addons-527125"
	I1008 14:10:24.858981  362613 addons.go:238] Setting addon registry-creds=true in "addons-527125"
	I1008 14:10:24.859027  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858058  362613 addons.go:69] Setting volcano=true in profile "addons-527125"
	I1008 14:10:24.859216  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.859240  362613 addons.go:238] Setting addon volcano=true in "addons-527125"
	I1008 14:10:24.859252  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.859271  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858482  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.858047  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.859443  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.859472  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.858433  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.859670  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.859683  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.857988  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.859713  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.857426  362613 addons.go:238] Setting addon ingress-dns=true in "addons-527125"
	I1008 14:10:24.859895  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.858022  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.861516  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.865430  362613 out.go:179] * Verifying Kubernetes components...
	I1008 14:10:24.858023  362613 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-527125"
	I1008 14:10:24.865685  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.866370  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.866425  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.867030  362613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 14:10:24.858444  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.867216  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.869407  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.869454  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.858005  362613 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-527125"
	I1008 14:10:24.869874  362613 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-527125"
	I1008 14:10:24.869917  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.870297  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.870330  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.887467  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
	I1008 14:10:24.888369  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I1008 14:10:24.889608  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33739
	I1008 14:10:24.890537  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.891280  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.891339  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.891980  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.892121  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.892233  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.892915  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.892961  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.893345  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.893526  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.893547  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.893618  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.894127  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.894190  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.894243  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I1008 14:10:24.894601  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.895116  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.895156  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.901444  362613 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-527125"
	I1008 14:10:24.901506  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.901688  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I1008 14:10:24.901941  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I1008 14:10:24.902593  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.902639  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.906452  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I1008 14:10:24.906663  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.907418  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.907532  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.908061  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.908081  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.908502  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.908516  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.908780  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.908798  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.908896  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I1008 14:10:24.909440  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.909683  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.910627  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.910658  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I1008 14:10:24.912555  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I1008 14:10:24.912719  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.912763  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.913614  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.913665  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.914349  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.914709  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.915026  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.915396  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.915430  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.916035  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.916910  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.916928  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.917364  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.917350  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.917950  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36911
	I1008 14:10:24.918092  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.918238  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.918249  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.918678  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.918737  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.920151  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.920232  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.920755  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.920804  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.921292  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I1008 14:10:24.921684  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.921904  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.922350  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.922378  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.922522  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.922540  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.922945  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.923423  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.923586  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.924101  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.924186  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.924324  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.928983  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I1008 14:10:24.929276  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.929320  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.929408  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.929638  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I1008 14:10:24.929660  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39515
	I1008 14:10:24.929939  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.929960  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.931634  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.934285  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.935463  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.935487  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.936050  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.936090  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.936459  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I1008 14:10:24.936682  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.936730  362613 addons.go:238] Setting addon default-storageclass=true in "addons-527125"
	I1008 14:10:24.936772  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:24.937119  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.937156  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.937316  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.937332  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.937767  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.937774  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.938380  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.938427  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.944099  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.944157  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I1008 14:10:24.944653  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.944694  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.944805  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.946635  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I1008 14:10:24.946721  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.946739  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.946889  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.946903  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.947334  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.947667  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.947757  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.948144  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.948242  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.948260  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.949241  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.949291  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.951013  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.951085  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.951332  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I1008 14:10:24.954289  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.954311  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.954454  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.954620  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I1008 14:10:24.955020  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.955101  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.955787  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.956296  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.956321  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.956717  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.956942  362613 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1008 14:10:24.956950  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.957496  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.958192  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.958236  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.958432  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.959017  362613 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 14:10:24.959042  362613 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 14:10:24.959068  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:24.959575  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.959598  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.959234  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I1008 14:10:24.960205  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.961392  362613 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1008 14:10:24.961485  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.961532  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.963454  362613 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1008 14:10:24.963481  362613 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1008 14:10:24.963504  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:24.964321  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.964343  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I1008 14:10:24.964449  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.965248  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.965268  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.965783  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.966014  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.967957  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.968002  362613 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1008 14:10:24.969305  362613 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 14:10:24.969325  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1008 14:10:24.969346  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:24.970607  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.972217  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.972237  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.973512  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:24.973588  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:24.973601  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.974578  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:24.974772  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:24.975452  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.976325  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:24.976386  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.976709  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.978320  362613 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1008 14:10:24.979665  362613 out.go:179]   - Using image docker.io/registry:3.0.0
	I1008 14:10:24.980816  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I1008 14:10:24.980905  362613 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1008 14:10:24.980918  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1008 14:10:24.980941  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:24.982122  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.983001  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.983182  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39085
	I1008 14:10:24.983646  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41893
	I1008 14:10:24.984695  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:24.984722  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.985598  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I1008 14:10:24.985779  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:24.986005  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.986171  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32795
	I1008 14:10:24.986588  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.986607  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.986699  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.987569  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:24.988266  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.988367  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.988432  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.988539  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.988749  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.989016  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:24.989113  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.989141  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.989565  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.989625  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.989666  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.989901  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.989937  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.989951  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.990026  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.990175  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I1008 14:10:24.991211  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.991233  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.991249  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.991295  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.991301  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:24.991348  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.991379  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:24.991382  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:24.991374  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:24.991404  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:24.991575  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.991586  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:24.992067  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:24.992073  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.992137  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:24.992384  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:24.992616  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:24.992828  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:24.993031  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:24.993344  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.994325  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I1008 14:10:24.994538  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.994952  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:24.995003  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:24.995408  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.995840  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:24.996385  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.996408  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.996918  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.996999  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.997457  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.997734  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.998291  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:24.998395  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:24.998632  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:24.998836  362613 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1008 14:10:24.998918  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:24.998944  362613 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 14:10:24.999038  362613 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1008 14:10:24.999069  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:24.999132  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I1008 14:10:24.999840  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I1008 14:10:25.000176  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.000198  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.000494  362613 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:10:25.000512  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 14:10:25.000531  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.000697  362613 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1008 14:10:25.000711  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1008 14:10:25.000729  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.000843  362613 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1008 14:10:25.000857  362613 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1008 14:10:25.000874  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.000875  362613 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1008 14:10:25.001089  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.001103  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.001173  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.001651  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.002082  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:25.002309  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I1008 14:10:25.002494  362613 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1008 14:10:25.002543  362613 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1008 14:10:25.002556  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1008 14:10:25.002574  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.002952  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.003117  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.003263  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.003275  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.003810  362613 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 14:10:25.003835  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1008 14:10:25.003854  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.004199  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.004812  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.004840  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.005421  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1008 14:10:25.005622  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.006596  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:25.007264  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:25.007347  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:25.008213  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1008 14:10:25.009029  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.009125  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I1008 14:10:25.010410  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.010870  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.011153  362613 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 14:10:25.011214  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1008 14:10:25.011524  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.011746  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.011766  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.012175  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.012194  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I1008 14:10:25.012196  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.012707  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.012735  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.012855  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.013118  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.013348  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:25.013430  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.013715  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.013790  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.013819  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.013863  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.013880  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.014090  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.014104  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.014117  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.014389  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.014584  362613 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1008 14:10:25.014604  362613 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1008 14:10:25.014643  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.014679  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1008 14:10:25.014868  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.014950  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.014969  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.015034  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.015303  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.015480  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:25.015667  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.015735  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.015837  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.016126  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.016143  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.016214  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.016606  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.016702  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.016907  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.017060  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.017905  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.017130  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.017162  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.017326  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.018183  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.018301  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.018457  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.018559  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.018872  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.019169  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1008 14:10:25.019193  362613 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1008 14:10:25.019587  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1008 14:10:25.019605  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.020978  362613 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1008 14:10:25.021039  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1008 14:10:25.020979  362613 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 14:10:25.023075  362613 out.go:179]   - Using image docker.io/busybox:stable
	I1008 14:10:25.023114  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1008 14:10:25.023182  362613 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1008 14:10:25.023197  362613 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1008 14:10:25.023219  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.023243  362613 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 14:10:25.023260  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1008 14:10:25.023278  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.024217  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36451
	I1008 14:10:25.024604  362613 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 14:10:25.024651  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1008 14:10:25.024671  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.025062  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.025450  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.025750  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1008 14:10:25.025886  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.025916  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.026388  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.026604  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:25.026979  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.027037  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.027493  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.027736  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.027939  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.028187  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.028495  362613 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1008 14:10:25.029946  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1008 14:10:25.029967  362613 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1008 14:10:25.030199  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.030240  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.031146  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.031248  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.031277  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.031329  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.031344  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.031411  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.031632  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:25.031650  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:25.031854  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.032008  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:25.032038  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:25.032046  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:25.032055  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:25.032063  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:25.032244  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.032261  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.032326  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:25.032339  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:25.032594  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	W1008 14:10:25.032619  362613 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1008 14:10:25.032641  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.032612  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.032747  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.033189  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.033214  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.033414  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.033576  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.033635  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.033850  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.034127  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.034315  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.035873  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40247
	I1008 14:10:25.035993  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.036422  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:25.036546  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.036576  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.036779  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.036950  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.036958  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:25.037055  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:25.037116  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.037268  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:25.037432  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:25.037808  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:25.039796  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:25.040015  362613 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 14:10:25.040031  362613 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 14:10:25.040049  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:25.043385  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.043929  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:25.043956  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:25.044167  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:25.044384  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:25.044560  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:25.044723  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	W1008 14:10:25.141668  362613 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36288->192.168.39.51:22: read: connection reset by peer
	I1008 14:10:25.141722  362613 retry.go:31] will retry after 309.445583ms: ssh: handshake failed: read tcp 192.168.39.1:36288->192.168.39.51:22: read: connection reset by peer
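
The two lines above show the transient-SSH-failure handling: the first dial is reset by the peer, and retry.go schedules another attempt after a short randomized delay. A minimal, hypothetical Go sketch of that retry-after-delay pattern (dialSSH and the delay bounds are illustrative, not minikube's actual retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // dialSSH stands in for the real TCP dial + SSH handshake; it is
    // hypothetical and always fails here, to exercise the retry path.
    func dialSSH(addr string) error {
    	return errors.New("ssh: handshake failed: read: connection reset by peer")
    }

    // dialWithRetry retries transient dial failures after a randomized
    // delay, comparable to the ~309ms back-off logged by retry.go above.
    func dialWithRetry(addr string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = dialSSH(addr); err == nil {
    			return nil
    		}
    		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	if err := dialWithRetry("192.168.39.51:22", 3); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
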
	I1008 14:10:25.569035  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 14:10:25.576324  362613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 14:10:25.576326  362613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
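
The bash pipeline above edits the coredns ConfigMap in place: sed inserts a hosts block ahead of the forward directive (so host.minikube.internal resolves to the host's gateway IP, 192.168.39.1) and a log directive ahead of errors, then pipes the result back through kubectl replace. Based on those two sed expressions, the affected part of the Corefile ends up roughly like this (surrounding directives elided):

    .:53 {
        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }
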
	I1008 14:10:25.668914  362613 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 14:10:25.668938  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1008 14:10:25.676604  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 14:10:25.742257  362613 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1008 14:10:25.742287  362613 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1008 14:10:25.783261  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 14:10:25.886361  362613 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1008 14:10:25.886389  362613 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1008 14:10:25.916256  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 14:10:25.939323  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1008 14:10:25.956028  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 14:10:25.956285  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 14:10:25.958218  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1008 14:10:25.968065  362613 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1008 14:10:25.968100  362613 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1008 14:10:25.998509  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1008 14:10:26.013256  362613 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:26.013292  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1008 14:10:26.201521  362613 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1008 14:10:26.201559  362613 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1008 14:10:26.272275  362613 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 14:10:26.272315  362613 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 14:10:26.542441  362613 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1008 14:10:26.542474  362613 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1008 14:10:26.572684  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:26.581537  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1008 14:10:26.581576  362613 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1008 14:10:26.711829  362613 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1008 14:10:26.711865  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1008 14:10:26.858339  362613 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1008 14:10:26.858386  362613 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1008 14:10:27.115102  362613 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 14:10:27.115143  362613 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 14:10:27.159929  362613 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1008 14:10:27.159961  362613 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1008 14:10:27.168881  362613 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1008 14:10:27.168904  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1008 14:10:27.345093  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1008 14:10:27.345122  362613 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1008 14:10:27.356167  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1008 14:10:27.405398  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 14:10:27.527014  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1008 14:10:27.527059  362613 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1008 14:10:27.653534  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1008 14:10:27.762504  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1008 14:10:27.762544  362613 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1008 14:10:28.010224  362613 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 14:10:28.010261  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1008 14:10:28.133747  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1008 14:10:28.133780  362613 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1008 14:10:28.706156  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 14:10:28.794245  362613 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1008 14:10:28.794277  362613 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1008 14:10:29.330829  362613 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1008 14:10:29.330866  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1008 14:10:29.647425  362613 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1008 14:10:29.647463  362613 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1008 14:10:30.061883  362613 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1008 14:10:30.061912  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1008 14:10:30.353542  362613 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1008 14:10:30.353569  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1008 14:10:30.502685  362613 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 14:10:30.502722  362613 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1008 14:10:30.682757  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.113680139s)
	I1008 14:10:30.682813  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:30.682828  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:30.682886  362613 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.1065177s)
	I1008 14:10:30.682940  362613 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.106502042s)
	I1008 14:10:30.682968  362613 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1008 14:10:30.682995  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.006355674s)
	I1008 14:10:30.683049  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:30.683062  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:30.683247  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:30.683295  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:30.683307  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:30.683321  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:30.683342  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:30.683432  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:30.683441  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:30.683451  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:30.683458  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:30.683881  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:30.683887  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:30.683899  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:30.683914  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:30.683938  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:30.683945  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:30.684028  362613 node_ready.go:35] waiting up to 6m0s for node "addons-527125" to be "Ready" ...
	I1008 14:10:30.732636  362613 node_ready.go:49] node "addons-527125" is "Ready"
	I1008 14:10:30.732676  362613 node_ready.go:38] duration metric: took 48.619385ms for node "addons-527125" to be "Ready" ...
	I1008 14:10:30.732694  362613 api_server.go:52] waiting for apiserver process to appear ...
	I1008 14:10:30.732750  362613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
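
node_ready.go above polls until the node reports Ready, after which the runner waits for the apiserver process to appear. A hedged client-go sketch of the readiness half (assumes an already-configured *kubernetes.Clientset; this is an illustration, not minikube's actual implementation):

    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node's status conditions until NodeReady
    // is True or the timeout elapses.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }
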
	I1008 14:10:30.811253  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 14:10:30.847654  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:30.847684  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:30.848036  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:30.848042  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:30.848071  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:31.286762  362613 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-527125" context rescaled to 1 replicas
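
kapi.go above rescales the coredns deployment to a single replica. One way to express the same operation through client-go's scale subresource (a sketch under the same assumed clientset; the real code path may differ):

    package corednsscale

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS sets the coredns deployment's replica count via the
    // scale subresource, mirroring the rescale logged above.
    func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }
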
	I1008 14:10:32.150075  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.366771595s)
	I1008 14:10:32.150154  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.150169  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.150476  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.150548  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.150565  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.150583  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.150596  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.150839  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.150885  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.150895  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.416450  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.500152221s)
	I1008 14:10:32.416518  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.416533  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.416535  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.477172097s)
	I1008 14:10:32.416587  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.416610  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.416835  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.416855  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.416860  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.416865  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.416872  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.416941  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.417004  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.417018  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.417027  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.417034  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.417437  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.417444  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.417463  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.417459  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.417481  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.417490  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.448501  362613 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1008 14:10:32.448586  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:32.452801  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:32.453435  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:32.453465  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:32.453769  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:32.454059  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:32.454272  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:32.454501  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
	I1008 14:10:32.569969  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:32.569996  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:32.570484  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:32.570505  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:32.570528  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:32.772409  362613 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1008 14:10:32.924895  362613 addons.go:238] Setting addon gcp-auth=true in "addons-527125"
	I1008 14:10:32.924959  362613 host.go:66] Checking if "addons-527125" exists ...
	I1008 14:10:32.925268  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:32.925312  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:32.940573  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I1008 14:10:32.941066  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:32.941698  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:32.941727  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:32.942120  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:32.942807  362613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:10:32.942867  362613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:10:32.957830  362613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I1008 14:10:32.958301  362613 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:10:32.958894  362613 main.go:141] libmachine: Using API Version  1
	I1008 14:10:32.958924  362613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:10:32.959379  362613 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:10:32.959608  362613 main.go:141] libmachine: (addons-527125) Calling .GetState
	I1008 14:10:32.962297  362613 main.go:141] libmachine: (addons-527125) Calling .DriverName
	I1008 14:10:32.962640  362613 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1008 14:10:32.962676  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHHostname
	I1008 14:10:32.966854  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:32.967509  362613 main.go:141] libmachine: (addons-527125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:e6:8d", ip: ""} in network mk-addons-527125: {Iface:virbr1 ExpiryTime:2025-10-08 15:09:57 +0000 UTC Type:0 Mac:52:54:00:74:e6:8d Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:addons-527125 Clientid:01:52:54:00:74:e6:8d}
	I1008 14:10:32.967550  362613 main.go:141] libmachine: (addons-527125) DBG | domain addons-527125 has defined IP address 192.168.39.51 and MAC address 52:54:00:74:e6:8d in network mk-addons-527125
	I1008 14:10:32.967766  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHPort
	I1008 14:10:32.968019  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHKeyPath
	I1008 14:10:32.968223  362613 main.go:141] libmachine: (addons-527125) Calling .GetSSHUsername
	I1008 14:10:32.968419  362613 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa Username:docker}
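The sshutil.go line above records the client parameters minikube resolved for the VM: IP 192.168.39.51, port 22, the machine's id_rsa key, and user docker. A minimal sketch of opening an equivalent connection with golang.org/x/crypto/ssh; this is not minikube's own sshutil implementation, and the command run at the end is illustrative:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are taken from the sshutil.go line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21681-357044/.minikube/machines/addons-527125/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Host key checking skipped for the sketch only; minikube manages this VM.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.51:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run one command the way ssh_runner.go does, capturing combined output.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
	fmt.Printf("err=%v output=%s\n", err, out)
}
```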
	I1008 14:10:33.974125  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.017791324s)
	I1008 14:10:33.974154  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.018096785s)
	I1008 14:10:33.974196  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974208  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974218  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.01596896s)
	I1008 14:10:33.974251  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.975711658s)
	I1008 14:10:33.974264  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974278  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974279  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974290  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974197  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974339  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974381  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.401657333s)
	W1008 14:10:33.974415  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:33.974436  362613 retry.go:31] will retry after 228.642696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
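Both attempts above fail for the same reason: kubectl's client-side validation requires every manifest to carry top-level apiVersion and kind, and the rendered ig-crd.yaml evidently has a document with neither set. A minimal sketch that reproduces the check locally, assuming gopkg.in/yaml.v3 and the file path from the log:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// header holds only the two fields kubectl's validator insists on.
type header struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	// A manifest file may contain several ----separated documents; check each one.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 1; ; i++ {
		var h header
		if err := dec.Decode(&h); err != nil {
			break // io.EOF once all documents are read
		}
		if h.APIVersion == "" || h.Kind == "" {
			fmt.Printf("document %d: apiVersion=%q kind=%q (kubectl will reject this)\n",
				i, h.APIVersion, h.Kind)
		}
	}
}
```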
	I1008 14:10:33.974473  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.618242058s)
	I1008 14:10:33.974508  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974521  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974581  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.321012696s)
	I1008 14:10:33.974534  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.569100642s)
	I1008 14:10:33.974603  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974612  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974769  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.974806  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.974808  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.974816  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974827  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.974844  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.974848  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974891  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.974901  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.974906  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.974910  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974917  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974925  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.974931  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.974939  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974944  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974954  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.974970  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.974982  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.974812  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.975029  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.975033  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.975043  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.975051  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.975058  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.975035  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.975106  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.975180  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.975196  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.975201  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.975205  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.975220  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.975224  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.975228  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.975236  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.975246  362613 addons.go:479] Verifying addon registry=true in "addons-527125"
	I1008 14:10:33.975395  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.975421  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.975431  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.975525  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.975561  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.975586  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.975592  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.975599  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:33.975641  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:33.975694  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.975700  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.977465  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.977540  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.977560  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.977579  362613 addons.go:479] Verifying addon metrics-server=true in "addons-527125"
	I1008 14:10:33.975375  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.977815  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.977958  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.977983  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:33.978007  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.978013  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.978022  362613 addons.go:479] Verifying addon ingress=true in "addons-527125"
	I1008 14:10:33.978204  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:33.979806  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:33.980674  362613 out.go:179] * Verifying registry addon...
	I1008 14:10:33.980674  362613 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-527125 service yakd-dashboard -n yakd-dashboard
	
	I1008 14:10:33.981585  362613 out.go:179] * Verifying ingress addon...
	I1008 14:10:33.983097  362613 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1008 14:10:33.984074  362613 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1008 14:10:34.102044  362613 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 14:10:34.102082  362613 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1008 14:10:34.102081  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:34.102092  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
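Each kapi.go:96 line above is one iteration of a poll: list pods by label selector, log the current phase, and keep going until everything runs. A condensed sketch of the same loop with client-go; the selector, namespace, and kubeconfig path come from the log, while the loop itself is illustrative rather than minikube's kapi implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns reports Running.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not running within %s", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute)
	fmt.Println(err)
}
```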
	I1008 14:10:34.203459  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:34.557295  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:34.557766  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:34.780647  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.074434285s)
	W1008 14:10:34.780704  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 14:10:34.780716  362613 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.047943919s)
	I1008 14:10:34.780734  362613 retry.go:31] will retry after 262.026783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
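This failure is an ordering race rather than a bad manifest: the same apply both registers the VolumeSnapshotClass CRD and creates a csi-hostpath-snapclass object of that kind, and the object is rejected until the CRD is established, which is why the retry later succeeds. A sketch of waiting for the Established condition explicitly, assuming the apiextensions client; the wait loop is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitEstablished blocks until the named CRD reports Established=True.
func waitEstablished(c *apiextclient.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("CRD %s not established within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitEstablished(client, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute)
	fmt.Println(err)
}
```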
	I1008 14:10:34.780754  362613 api_server.go:72] duration metric: took 9.923604602s to wait for apiserver process to appear ...
	I1008 14:10:34.780763  362613 api_server.go:88] waiting for apiserver healthz status ...
	I1008 14:10:34.780792  362613 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1008 14:10:34.794761  362613 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I1008 14:10:34.799841  362613 api_server.go:141] control plane version: v1.34.1
	I1008 14:10:34.799871  362613 api_server.go:131] duration metric: took 19.101383ms to wait for apiserver health ...
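api_server.go above polls the apiserver's /healthz endpoint until it answers 200 before moving on to pod checks. An equivalent standalone probe, using the endpoint from the log and skipping TLS verification only because the cluster CA is not loaded in this sketch:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// The apiserver serves a cluster-local CA; skip verification for the sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.51:8443/healthz", time.Minute))
}
```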
	I1008 14:10:34.799882  362613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 14:10:34.838187  362613 system_pods.go:59] 16 kube-system pods found
	I1008 14:10:34.838250  362613 system_pods.go:61] "amd-gpu-device-plugin-6bmcm" [011bcb5e-8b34-4d09-97eb-08d897d3141c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1008 14:10:34.838264  362613 system_pods.go:61] "coredns-66bc5c9577-lcj7d" [cbb1bba7-6179-4e74-82b7-f2f56b196d3f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:10:34.838278  362613 system_pods.go:61] "coredns-66bc5c9577-wpmqj" [65db61e2-0d47-4e9b-b9c2-6ea3a960de4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:10:34.838289  362613 system_pods.go:61] "etcd-addons-527125" [d2f32c7f-4649-466c-880b-81e7127e9135] Running
	I1008 14:10:34.838298  362613 system_pods.go:61] "kube-apiserver-addons-527125" [ebf304dd-76bb-44ec-b5c4-855e63f0584f] Running
	I1008 14:10:34.838304  362613 system_pods.go:61] "kube-controller-manager-addons-527125" [3aa1eb48-bfec-4900-9904-10114be9c20a] Running
	I1008 14:10:34.838312  362613 system_pods.go:61] "kube-ingress-dns-minikube" [84424f19-bc58-4bb1-9395-b41f8b2717e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 14:10:34.838321  362613 system_pods.go:61] "kube-proxy-6vjk6" [6f47cb93-a6a7-4a3c-9b32-2ad93ae128b8] Running
	I1008 14:10:34.838328  362613 system_pods.go:61] "kube-scheduler-addons-527125" [9d9abfeb-1de0-45f4-8d51-de0058b81f3f] Running
	I1008 14:10:34.838339  362613 system_pods.go:61] "metrics-server-85b7d694d7-j2pnk" [c46f59cd-bf89-4732-9756-c81ffea7ce87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 14:10:34.838349  362613 system_pods.go:61] "nvidia-device-plugin-daemonset-2tj86" [1b591317-30f5-433d-a13f-035b3c173ac6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 14:10:34.838381  362613 system_pods.go:61] "registry-66898fdd98-lhrp9" [8f7fc2f9-1f90-4337-a178-fdb7f65c2522] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 14:10:34.838389  362613 system_pods.go:61] "registry-creds-764b6fb674-r9vvx" [ddc5ade5-700f-488e-b7e2-614bbb15e104] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 14:10:34.838398  362613 system_pods.go:61] "registry-proxy-fmht2" [774b61c7-fefe-4387-a25a-6db2c0e46f1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 14:10:34.838404  362613 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zpztd" [4eec2f87-7398-4357-88b1-7197fcdefd7d] Pending
	I1008 14:10:34.838412  362613 system_pods.go:61] "storage-provisioner" [9a2868a1-5ef1-4d01-b26b-49b3ea8370a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 14:10:34.838424  362613 system_pods.go:74] duration metric: took 38.535871ms to wait for pod list to return data ...
	I1008 14:10:34.838438  362613 default_sa.go:34] waiting for default service account to be created ...
	I1008 14:10:34.883347  362613 default_sa.go:45] found service account: "default"
	I1008 14:10:34.883396  362613 default_sa.go:55] duration metric: took 44.945254ms for default service account to be created ...
	I1008 14:10:34.883409  362613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 14:10:34.913405  362613 system_pods.go:86] 17 kube-system pods found
	I1008 14:10:34.913444  362613 system_pods.go:89] "amd-gpu-device-plugin-6bmcm" [011bcb5e-8b34-4d09-97eb-08d897d3141c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1008 14:10:34.913452  362613 system_pods.go:89] "coredns-66bc5c9577-lcj7d" [cbb1bba7-6179-4e74-82b7-f2f56b196d3f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:10:34.913460  362613 system_pods.go:89] "coredns-66bc5c9577-wpmqj" [65db61e2-0d47-4e9b-b9c2-6ea3a960de4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 14:10:34.913465  362613 system_pods.go:89] "etcd-addons-527125" [d2f32c7f-4649-466c-880b-81e7127e9135] Running
	I1008 14:10:34.913469  362613 system_pods.go:89] "kube-apiserver-addons-527125" [ebf304dd-76bb-44ec-b5c4-855e63f0584f] Running
	I1008 14:10:34.913473  362613 system_pods.go:89] "kube-controller-manager-addons-527125" [3aa1eb48-bfec-4900-9904-10114be9c20a] Running
	I1008 14:10:34.913479  362613 system_pods.go:89] "kube-ingress-dns-minikube" [84424f19-bc58-4bb1-9395-b41f8b2717e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 14:10:34.913483  362613 system_pods.go:89] "kube-proxy-6vjk6" [6f47cb93-a6a7-4a3c-9b32-2ad93ae128b8] Running
	I1008 14:10:34.913487  362613 system_pods.go:89] "kube-scheduler-addons-527125" [9d9abfeb-1de0-45f4-8d51-de0058b81f3f] Running
	I1008 14:10:34.913491  362613 system_pods.go:89] "metrics-server-85b7d694d7-j2pnk" [c46f59cd-bf89-4732-9756-c81ffea7ce87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 14:10:34.913499  362613 system_pods.go:89] "nvidia-device-plugin-daemonset-2tj86" [1b591317-30f5-433d-a13f-035b3c173ac6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 14:10:34.913504  362613 system_pods.go:89] "registry-66898fdd98-lhrp9" [8f7fc2f9-1f90-4337-a178-fdb7f65c2522] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 14:10:34.913509  362613 system_pods.go:89] "registry-creds-764b6fb674-r9vvx" [ddc5ade5-700f-488e-b7e2-614bbb15e104] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 14:10:34.913515  362613 system_pods.go:89] "registry-proxy-fmht2" [774b61c7-fefe-4387-a25a-6db2c0e46f1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 14:10:34.913518  362613 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt65d" [46807e12-865b-42a6-8f14-c6965786c0ca] Pending
	I1008 14:10:34.913523  362613 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zpztd" [4eec2f87-7398-4357-88b1-7197fcdefd7d] Pending
	I1008 14:10:34.913527  362613 system_pods.go:89] "storage-provisioner" [9a2868a1-5ef1-4d01-b26b-49b3ea8370a5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 14:10:34.913535  362613 system_pods.go:126] duration metric: took 30.119755ms to wait for k8s-apps to be running ...
	I1008 14:10:34.913543  362613 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 14:10:34.913596  362613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:10:35.005164  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:35.005190  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:35.044002  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 14:10:35.563943  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:35.566161  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:35.637832  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.826510221s)
	I1008 14:10:35.637868  362613 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.6752055s)
	I1008 14:10:35.637892  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:35.637908  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:35.638259  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:35.638281  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:35.638292  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:35.638300  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:35.638640  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:35.638703  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:35.638739  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:35.638752  362613 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-527125"
	I1008 14:10:35.639551  362613 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 14:10:35.640512  362613 out.go:179] * Verifying csi-hostpath-driver addon...
	I1008 14:10:35.642071  362613 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1008 14:10:35.642853  362613 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1008 14:10:35.643445  362613 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1008 14:10:35.643482  362613 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1008 14:10:35.689455  362613 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 14:10:35.689485  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:35.889904  362613 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1008 14:10:35.889942  362613 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1008 14:10:35.991570  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:35.992928  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:36.055084  362613 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 14:10:36.055109  362613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1008 14:10:36.129239  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 14:10:36.148271  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:36.493737  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:36.495861  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:36.658420  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:36.997759  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:36.998691  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:37.153021  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:37.494444  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:37.494539  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:37.657751  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:37.730915  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.527410851s)
	W1008 14:10:37.730966  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:37.730976  362613 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.817353069s)
	I1008 14:10:37.730997  362613 retry.go:31] will retry after 279.418267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
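retry.go above schedules each reattempt with a growing, jittered delay (228ms and 262ms earlier in the log, 279ms here, then 626ms and beyond). A stripped-down sketch of the same apply-until-success pattern; the command, attempt count, and plain doubling schedule are illustrative rather than minikube's exact backoff:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry reruns a command with growing backoff until it succeeds
// or the attempt budget is exhausted, returning the last failure.
func applyWithRetry(args []string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
		time.Sleep(backoff)
		backoff *= 2 // grow the delay, loosely matching the increasing intervals in the log
	}
	return lastErr
}

func main() {
	err := applyWithRetry([]string{
		"kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/ig-crd.yaml",
		"-f", "/etc/kubernetes/addons/ig-deployment.yaml",
	}, 5, 250*time.Millisecond)
	if err != nil {
		fmt.Println("apply failed after retries:", err)
	}
}
```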
	I1008 14:10:37.731014  362613 system_svc.go:56] duration metric: took 2.817465371s WaitForService to wait for kubelet
	I1008 14:10:37.731038  362613 kubeadm.go:586] duration metric: took 12.873881574s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 14:10:37.731071  362613 node_conditions.go:102] verifying NodePressure condition ...
	I1008 14:10:37.751726  362613 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 14:10:37.751775  362613 node_conditions.go:123] node cpu capacity is 2
	I1008 14:10:37.751795  362613 node_conditions.go:105] duration metric: took 20.717564ms to run NodePressure ...
	I1008 14:10:37.751812  362613 start.go:241] waiting for startup goroutines ...
	I1008 14:10:37.812902  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.76883331s)
	I1008 14:10:37.812986  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:37.813001  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:37.813345  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:37.813377  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:37.813387  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:37.813395  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:37.813676  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:37.813700  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:37.813714  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:37.899550  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.770266195s)
	I1008 14:10:37.899641  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:37.899661  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:37.900003  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:37.900026  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:37.900035  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:10:37.900042  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:10:37.900054  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:37.900279  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:10:37.900295  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:10:37.900308  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:10:37.901698  362613 addons.go:479] Verifying addon gcp-auth=true in "addons-527125"
	I1008 14:10:37.904772  362613 out.go:179] * Verifying gcp-auth addon...
	I1008 14:10:37.907027  362613 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1008 14:10:37.988494  362613 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 14:10:37.988526  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:38.011825  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:38.038964  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:38.040242  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:38.148492  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:38.412235  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:38.487492  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:38.490672  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:38.650950  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:38.910629  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:38.991708  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:38.993543  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:39.150003  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:39.415019  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:39.429505  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.417631715s)
	W1008 14:10:39.429556  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:39.429612  362613 retry.go:31] will retry after 626.230101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:39.488922  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:39.490549  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:39.652140  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:39.914275  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:40.015809  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:40.016461  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:40.056689  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:40.148273  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:40.410268  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:40.489170  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:40.490488  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:40.648787  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:40.913414  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:40.989671  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:40.989897  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:41.147480  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:41.267503  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.21073057s)
	W1008 14:10:41.267561  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:41.267586  362613 retry.go:31] will retry after 697.547579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:41.411631  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:41.487575  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:41.492658  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:41.649001  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:41.912535  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:41.965661  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:41.998200  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:41.998518  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:42.149715  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:42.412350  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:42.490493  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:42.490565  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:42.651268  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:42.915047  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:42.996925  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:42.998595  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:43.139659  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.173934852s)
	W1008 14:10:43.139701  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:43.139724  362613 retry.go:31] will retry after 1.498170122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:43.150431  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:43.412488  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:43.493163  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:43.494694  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:43.650039  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:43.912972  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:43.989722  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:43.989884  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:44.151885  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:44.412780  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:44.491084  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:44.491473  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:44.638695  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:44.646748  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:44.912684  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:44.990194  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:44.992224  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:45.157452  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:45.417725  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:45.490323  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:45.492958  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 14:10:45.606782  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:45.606821  362613 retry.go:31] will retry after 2.636387245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:45.649114  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:46.057418  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:46.071347  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:46.071642  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:46.157989  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:46.411707  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:46.490050  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:46.490318  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:46.650597  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:46.911166  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:46.989493  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:46.991104  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:47.153018  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:47.410153  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:47.489525  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:47.492051  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:47.646505  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:47.916422  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:47.992816  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:48.000528  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:48.149772  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:48.243593  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:48.415375  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:48.489470  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:48.490913  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:48.648813  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:48.913459  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:48.988170  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:48.990574  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:49.239252  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:49.411826  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:49.477214  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.233572741s)
	W1008 14:10:49.477271  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:49.477297  362613 retry.go:31] will retry after 2.604134926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:49.487071  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:49.490939  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:49.690009  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:49.910914  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:49.986645  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:49.989513  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:50.224559  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:50.775049  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:50.775189  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:50.775385  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:50.775971  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:50.912218  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:50.988688  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:50.988974  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:51.146225  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:51.413154  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:51.488444  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:51.488455  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:51.647901  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:51.911985  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:51.987705  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:51.987791  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:52.082091  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:52.154856  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:52.502477  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:52.502712  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:52.502730  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:52.650623  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:52.911132  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:52.988581  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:52.990280  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:53.084184  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.002042396s)
	W1008 14:10:53.084245  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:53.084276  362613 retry.go:31] will retry after 3.995318243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:53.151206  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:53.410886  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:53.487860  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:53.489094  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:53.646973  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:53.911066  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:53.987606  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:53.987938  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:54.151723  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:54.412015  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:54.487521  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:54.487637  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:54.648148  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:54.911593  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:54.988861  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:54.989017  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:55.148212  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:55.413981  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:55.488980  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:55.491158  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:55.650556  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:55.912228  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:56.013244  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:56.014125  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:56.147962  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:56.411828  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:56.488076  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:56.488463  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:56.649187  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:56.910620  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:56.987612  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:56.988188  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:57.080427  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:10:57.147226  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:57.411213  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:57.487754  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:57.487908  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:57.648991  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1008 14:10:57.827177  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:57.827213  362613 retry.go:31] will retry after 8.129662816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:10:57.911993  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:57.988385  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:57.989240  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:58.147645  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:58.414518  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:58.488617  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:58.491243  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:58.651293  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:58.910565  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:58.993319  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:59.000988  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:59.148350  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:59.411299  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:59.487904  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:59.488861  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:10:59.648633  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:10:59.912204  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:10:59.989365  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:10:59.990233  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:00.146707  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:00.414100  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:00.491860  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:00.492054  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:00.648128  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:00.912920  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:00.988281  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:00.988389  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:01.148568  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:01.412744  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:01.486754  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:01.489635  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:01.648101  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:01.910966  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:01.990845  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:01.991784  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:02.147504  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:02.411510  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:02.488405  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:02.488588  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:02.650248  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:02.912525  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:03.012553  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:03.012629  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:03.147767  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:03.412301  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:03.486503  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:03.487893  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:03.646523  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:03.911033  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:03.987301  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:03.988279  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:04.147498  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:04.411192  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:04.486390  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:04.488081  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:04.647667  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:04.911225  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:04.991339  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:04.991400  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:05.155046  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:05.410480  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:05.487662  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:05.490089  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:05.649849  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:05.912009  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:05.957756  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:11:05.987163  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:06.002448  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:06.148630  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:06.411641  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:06.486736  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:06.490381  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:06.649705  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:06.913322  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:06.988075  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:06.988137  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:07.005591  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.04777683s)
	W1008 14:11:07.005660  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:11:07.005690  362613 retry.go:31] will retry after 8.296929499s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:11:07.147326  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:07.410926  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:07.488156  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:07.488773  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:07.648599  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:07.910957  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:07.987470  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:07.987687  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:08.170278  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:08.416466  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:08.490209  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:08.491074  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:08.803924  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:08.915063  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:08.988233  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:08.990187  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:09.150792  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:09.413387  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:09.488581  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:09.488856  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:09.649369  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:09.940024  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:09.989298  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:09.992953  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:10.148433  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:10.411086  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:10.487734  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:10.488302  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:10.651159  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:10.913123  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:10.992568  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:10.995679  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:11.160499  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:11.644445  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:11.644492  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:11.644606  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:11.646880  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:11.911055  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:11.989770  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:11.992415  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:12.148691  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:12.415468  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:12.490957  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:12.491280  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:12.647499  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:12.913474  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:13.018512  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:13.018538  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:13.147407  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:13.410810  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:13.493576  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:13.497038  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:13.647595  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:13.911557  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:13.988139  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:13.989289  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:14.146625  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:14.411328  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:14.488499  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:14.489162  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:14.647759  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:14.980631  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:14.987463  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:14.988380  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:15.148745  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:15.302790  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:11:15.411847  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:15.490407  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:15.493151  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:15.653796  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:15.913228  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:15.992799  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:15.992842  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:16.150002  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1008 14:11:16.207991  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:11:16.208030  362613 retry.go:31] will retry after 12.190154252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
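
Note the cadence of the retry.go:31 lines so far: roughly 1.5s, 2.6s, 2.6s, 4.0s, 8.1s, 8.3s, and now 12.2s — a jittered, roughly doubling backoff between apply attempts. A minimal sketch of that pattern, assuming (not verified against minikube's source) a doubling base delay plus random jitter:

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryApply retries a failing operation with jittered, doubling delays,
	// echoing the "will retry after ..." lines in the log above.
	func retryApply(apply func() error, maxAttempts int) error {
		backoff := 1500 * time.Millisecond // first logged delay was ~1.5s
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = apply(); err == nil {
				return nil
			}
			// Add up to ~60% jitter so parallel retries do not synchronize,
			// which is why the logged delays wobble rather than double exactly.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff*6/10)))
			fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			backoff *= 2
		}
		return err
	}
	
	func main() {
		calls := 0
		err := retryApply(func() error {
			calls++
			if calls < 3 {
				return errors.New("Process exited with status 1")
			}
			return nil
		}, 5)
		fmt.Println("result:", err)
	}

Because the ig-crd.yaml error is deterministic, no amount of backoff can succeed here; the retries only stretch the addon-enable phase out across the minutes of log below.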
	I1008 14:11:16.410010  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:16.488619  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:16.490427  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:16.648342  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:16.913287  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:16.987588  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:16.990392  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:17.149278  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:17.410942  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:17.488598  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:17.491714  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:17.647977  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:17.911150  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:17.986223  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:17.993170  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:18.149081  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:18.410541  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:18.487224  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:18.489147  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:18.648135  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:18.912325  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:19.011932  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:19.013266  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:19.147623  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:19.411279  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:19.486410  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:19.487990  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:19.647007  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:19.911750  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:19.987177  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:19.988024  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:20.146902  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:20.411211  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:20.486435  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:20.487786  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:20.648581  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:20.912471  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:20.987499  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:20.988509  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:21.150678  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:21.412525  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:21.491252  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:21.491438  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:21.649712  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:21.913070  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:21.988611  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:22.003606  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:22.149122  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:22.413402  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:22.489395  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:22.493981  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:22.646970  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:22.912811  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:23.014586  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:23.014611  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:23.148450  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:23.411118  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:23.487425  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 14:11:23.487910  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:23.646798  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:23.911765  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:23.987116  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:23.987452  362613 kapi.go:107] duration metric: took 50.004352031s to wait for kubernetes.io/minikube-addons=registry ...
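
The registry wait has now succeeded after ~50s of the kapi.go:96 polling that fills this log. A hypothetical sketch of such a label-selector wait loop, using client-go's fake clientset so it runs without a cluster; the selector string is taken from the log, and everything else (poll interval, package layout) is an assumption rather than minikube's actual implementation:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/kubernetes/fake"
	)
	
	// waitForPod polls for any Running pod matching the label selector,
	// printing a "waiting for pod" line per attempt like kapi.go:96 above.
	func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond) // the log polls roughly every 250-500ms
		}
		return fmt.Errorf("timed out waiting for pod %q", selector)
	}
	
	func main() {
		cs := fake.NewSimpleClientset() // fake client stands in for a real cluster
		err := waitForPod(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 2*time.Second)
		fmt.Println(err)
	}

The gcp-auth, ingress-nginx, and csi-hostpath-driver selectors below follow the same loop; each prints its own line per poll, which is why the four selectors interleave at sub-second intervals throughout this section.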
	I1008 14:11:24.147225  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:24.411064  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:24.489097  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:24.646891  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:24.914112  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:24.989807  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:25.148175  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:25.411477  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:25.489714  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:25.647147  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:25.911414  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:25.989216  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:26.146901  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:26.411765  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:26.487923  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:26.646300  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:26.911154  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:26.987749  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:27.148414  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:27.410983  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:27.488499  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:27.649277  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:27.912566  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:27.990426  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:28.148967  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:28.399227  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:11:28.411283  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:28.489335  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:28.651284  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:28.913916  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:29.238822  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:29.238944  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:29.412922  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:29.490451  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:29.650732  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:29.863756  362613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.464472694s)
	W1008 14:11:29.863808  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:11:29.863842  362613 retry.go:31] will retry after 25.647499292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
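	The apply fails because kubectl's client-side validation requires every document in a manifest to carry apiVersion and kind, and the rendered ig-crd.yaml evidently contains a document missing both. A small sketch of that check, assuming gopkg.in/yaml.v3 and the file path taken from the log (this is not kubectl's actual validator):

```go
// Reproduces the condition behind "error validating data:
// [apiVersion not set, kind not set]": every YAML document in a
// manifest must set both fields. Illustrative sketch only.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 1; ; i++ {
		var tm typeMeta
		if err := dec.Decode(&tm); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			fmt.Printf("document %d: %v\n", i, missing)
		}
	}
}
```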
	I1008 14:11:29.910052  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:29.990052  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:30.149256  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:30.410283  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:30.487764  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:30.657668  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:30.913370  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:30.988173  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:31.146720  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:31.413075  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:31.512529  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:31.647939  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:31.912067  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:31.988167  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:32.147136  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:32.410327  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:32.488492  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:32.648169  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:32.911004  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:32.989850  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:33.149647  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:33.412694  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:33.490281  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:33.649346  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:33.911200  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:33.989512  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:34.151171  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:34.412785  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:34.488672  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:34.647453  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:34.910423  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:34.989902  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:35.146954  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:35.413823  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:35.488792  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:35.647669  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:35.910942  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:35.990280  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:36.146840  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:36.411751  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:36.488802  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:36.647309  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:36.911284  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:36.988149  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:37.146781  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:37.413317  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:37.488161  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:37.647201  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:37.911284  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:37.988952  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:38.147651  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:38.411299  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:38.488173  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:38.647101  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:38.910613  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:38.988319  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:39.147425  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:39.411665  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:39.489243  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:39.647738  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:39.912171  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:39.988965  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:40.146436  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:40.412274  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:40.488453  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:40.647013  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:40.910883  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:40.988771  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:41.148165  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:41.411797  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:41.672015  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:41.672173  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:41.912417  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:41.988154  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:42.146673  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:42.412055  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:42.488680  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:42.647217  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:42.910843  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:42.988939  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:43.146808  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:43.411030  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:43.489006  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:43.647295  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:43.911231  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:43.988496  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:44.148070  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:44.413532  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:44.491169  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:44.648889  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:44.912977  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:44.987586  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:45.146384  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:45.412393  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:45.488985  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:45.646550  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:45.915065  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:45.989623  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:46.146948  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:46.411383  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:46.488206  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:46.648402  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:46.911696  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:46.988779  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:47.147495  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:47.410779  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:47.488270  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:47.646769  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:47.911343  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:47.987863  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:48.147608  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:48.412061  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:48.488943  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:48.646440  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:48.910582  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:48.987878  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:49.152245  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:49.410800  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:49.488605  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:49.647411  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:49.911710  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:49.988620  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:50.146804  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:50.411977  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:50.489009  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:50.646990  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:50.910769  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:50.987783  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:51.147601  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:51.411142  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:51.488966  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:51.647541  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:51.911303  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:51.988287  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:52.147722  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:52.410742  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:52.488191  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:52.647525  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:52.910652  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:52.988663  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:53.147626  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:53.410953  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:53.488602  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:53.647180  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:53.910998  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:53.988616  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:54.148069  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:54.411271  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:54.487823  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:54.647960  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:54.910393  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:54.989395  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:55.147334  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:55.410801  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:55.488643  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:55.511619  362613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 14:11:55.647218  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:55.911406  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:55.990233  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:56.147526  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1008 14:11:56.219560  362613 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 14:11:56.219645  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:11:56.219658  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:11:56.219987  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:11:56.220008  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:11:56.220016  362613 main.go:141] libmachine: Making call to close driver server
	I1008 14:11:56.220024  362613 main.go:141] libmachine: (addons-527125) Calling .Close
	I1008 14:11:56.220055  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	I1008 14:11:56.220275  362613 main.go:141] libmachine: Successfully made call to close driver server
	I1008 14:11:56.220290  362613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 14:11:56.220307  362613 main.go:141] libmachine: (addons-527125) DBG | Closing plugin on server side
	W1008 14:11:56.220506  362613 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
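	The retry.go:31 and addons.go:461 lines show the pattern minikube uses here: shell out to kubectl apply, and on failure retry once after a randomized backoff before surfacing the error via out.go:285 (the stderr suggests --validate=false as a workaround, but the log shows minikube simply retries and then reports the failure). A stdlib sketch of that retry-around-exec pattern, with applyAddon, the attempt count, and the backoff range as illustrative assumptions, and plain `kubectl` standing in for the full sudo/KUBECONFIG invocation:

```go
// Sketch of the retry-around-exec pattern visible in the
// retry.go:31 / addons.go:461 lines. Not minikube's actual code.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyAddon(files ...string) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w\noutput:\n%s", err, out)
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/ig-crd.yaml",
		"/etc/kubernetes/addons/ig-deployment.yaml",
	}
	const attempts = 2 // the log shows two apply attempts
	var err error
	for attempt := 1; attempt <= attempts; attempt++ {
		if err = applyAddon(files...); err == nil {
			return
		}
		if attempt < attempts {
			// Randomized backoff, like "will retry after 25.647499292s".
			wait := time.Duration(10+rand.Intn(20)) * time.Second
			fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
	}
	fmt.Printf("! Enabling 'inspektor-gadget' returned an error: %v\n", err)
}
```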
	I1008 14:11:56.410722  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:56.488371  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:56.648867  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:56.911336  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:56.988540  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:57.147386  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:57.411639  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:57.488411  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:57.647131  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:57.911011  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:57.989047  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:58.146462  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:58.410713  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:58.488814  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:58.647310  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:58.911285  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:58.987718  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:59.147582  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:59.411768  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:59.488775  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:11:59.647285  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:11:59.912215  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:11:59.989661  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:00.146974  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:00.410492  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:00.488416  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:00.647440  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:00.910658  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:00.987904  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:01.146415  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:01.410892  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:01.488185  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:01.646983  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:01.911394  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:01.988524  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:02.146969  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:02.411875  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:02.488252  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:02.646782  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:02.912237  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:02.988104  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:03.146345  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:03.410957  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:03.489105  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:03.647149  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:03.910557  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:03.988496  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:04.147114  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:04.410277  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:04.488104  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:04.646916  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:04.911468  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:04.988792  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:05.147295  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:05.410246  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:05.487722  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:05.647452  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:05.911708  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:05.988219  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:06.147446  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:06.411135  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:06.489095  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:06.647208  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:06.911113  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:06.988472  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:07.147399  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:07.411310  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:07.487594  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:07.647497  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:07.911286  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:07.988188  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:08.147298  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:08.410878  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:08.488194  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:08.646948  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:08.911455  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:08.988691  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:09.148104  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:09.410229  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:09.488131  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:09.647460  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:09.911478  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:09.988112  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:10.147401  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:10.410517  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:10.487825  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:10.648716  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:10.911221  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:10.989205  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:11.148041  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:11.411266  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:11.487484  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:11.647869  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:11.911879  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:12.012505  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:12.147907  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:12.411371  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:12.487754  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:12.648314  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:12.914698  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:12.989187  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:13.146115  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:13.412003  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:13.490607  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:13.648041  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:13.911791  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:13.988457  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:14.151586  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:14.411330  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:14.487911  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:14.647775  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:14.913473  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:14.988732  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:15.150472  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:15.414111  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:15.515986  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:15.648636  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:15.920739  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:16.020733  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:16.147886  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:16.411633  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:16.489124  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:16.654959  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:16.912881  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:16.989009  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:17.147527  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:17.412868  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:17.491945  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:17.646779  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:18.069734  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:18.075092  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:18.173206  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:18.410953  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:18.488527  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:18.649532  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:18.912318  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:18.990037  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:19.151071  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:19.412025  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:19.491771  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:19.650840  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:19.912173  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:20.013769  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:20.148462  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:20.412558  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:20.489373  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:20.648902  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:20.912583  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:20.987612  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:21.153583  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:21.442145  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:21.540092  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:21.646992  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:21.911680  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:21.999403  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:22.314320  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:22.410990  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:22.495595  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:22.647344  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:22.911075  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:23.013047  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:23.147080  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:23.413173  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:23.492332  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:23.647824  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:23.911076  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:23.989139  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:24.146753  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:24.411771  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:24.494263  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:24.649228  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:24.910617  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:24.987695  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:25.146865  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:25.411105  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:25.488989  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:25.647455  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:25.911147  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:25.988267  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:26.146946  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:26.411304  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:26.487869  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:26.648273  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:26.910285  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:26.990201  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:27.149302  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:27.411437  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:27.488106  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:27.647752  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:27.911382  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:27.987924  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:28.146653  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:28.417071  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:28.516128  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:28.646182  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:28.911373  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:28.987751  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:29.148282  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:29.410745  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:29.488332  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:29.647451  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:29.911132  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:29.988257  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:30.146592  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:30.419303  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:30.488735  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:30.647787  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:30.911944  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:30.987715  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:31.147092  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:31.410548  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:31.488045  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:31.646112  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:31.910776  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:31.988424  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:32.147633  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:32.411481  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:32.488338  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:32.647306  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:32.910563  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:32.988117  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:33.147290  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:33.411091  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:33.511831  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:33.647795  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:33.911159  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:33.991954  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:34.147696  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:34.411458  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:34.488161  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:34.646424  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:34.914148  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:34.988074  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:35.149504  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:35.413383  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:35.488311  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:35.651567  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:35.912215  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:35.988301  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:36.148889  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:36.412841  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:36.513122  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:36.651629  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:36.921984  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:36.993121  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:37.147516  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 14:12:37.411466  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:37.491021  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:37.649131  362613 kapi.go:107] duration metric: took 2m2.006272064s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1008 14:12:37.910306  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:37.987623  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:38.411047  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:38.512793  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:38.911307  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:38.988153  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:39.411062  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:39.488635  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:39.910855  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:39.988079  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:40.412005  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:40.512961  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:40.911306  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:40.988809  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:41.411758  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:41.488449  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:41.910930  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:41.988990  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:42.410674  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:42.488988  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:42.911311  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:42.987964  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:43.410467  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:43.728570  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:43.911164  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:43.987581  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:44.411030  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:44.489702  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:44.910753  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:44.987884  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:45.410956  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:45.488327  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:45.910986  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:45.988035  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:46.410967  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:46.488757  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:46.910191  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:46.990456  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:47.411609  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:47.489140  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:47.910904  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:47.989519  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:48.411918  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:48.512942  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:48.911373  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:48.992771  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:49.416466  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:49.488966  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:49.925827  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:49.995553  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:50.410036  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:50.495998  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:50.912438  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:50.989277  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:51.411827  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:51.488918  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:51.912834  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:51.990462  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:52.411463  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:52.493960  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:52.917470  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:52.988330  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:53.411290  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:53.490705  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:53.912024  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:53.991206  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:54.412820  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:54.489548  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:54.912798  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:54.987686  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:55.415188  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:55.491984  362613 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 14:12:55.915912  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:55.989077  362613 kapi.go:107] duration metric: took 2m22.00499685s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1008 14:12:56.411536  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:56.911269  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:57.412093  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:57.912692  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:58.413197  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:58.913934  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:59.412584  362613 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 14:12:59.912554  362613 kapi.go:107] duration metric: took 2m22.005518268s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1008 14:12:59.915040  362613 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-527125 cluster.
	I1008 14:12:59.916786  362613 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1008 14:12:59.918251  362613 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1008 14:12:59.920007  362613 out.go:179] * Enabled addons: ingress-dns, default-storageclass, storage-provisioner, cloud-spanner, storage-provisioner-rancher, registry-creds, metrics-server, nvidia-device-plugin, amd-gpu-device-plugin, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1008 14:12:59.921467  362613 addons.go:514] duration metric: took 2m35.064289212s for enable addons: enabled=[ingress-dns default-storageclass storage-provisioner cloud-spanner storage-provisioner-rancher registry-creds metrics-server nvidia-device-plugin amd-gpu-device-plugin yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1008 14:12:59.921534  362613 start.go:246] waiting for cluster config update ...
	I1008 14:12:59.921557  362613 start.go:255] writing updated cluster config ...
	I1008 14:12:59.921884  362613 ssh_runner.go:195] Run: rm -f paused
	I1008 14:12:59.936627  362613 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 14:12:59.941564  362613 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wpmqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:12:59.950904  362613 pod_ready.go:94] pod "coredns-66bc5c9577-wpmqj" is "Ready"
	I1008 14:12:59.950948  362613 pod_ready.go:86] duration metric: took 9.342932ms for pod "coredns-66bc5c9577-wpmqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:12:59.953623  362613 pod_ready.go:83] waiting for pod "etcd-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:12:59.963243  362613 pod_ready.go:94] pod "etcd-addons-527125" is "Ready"
	I1008 14:12:59.963281  362613 pod_ready.go:86] duration metric: took 9.622658ms for pod "etcd-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:12:59.966437  362613 pod_ready.go:83] waiting for pod "kube-apiserver-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:12:59.975641  362613 pod_ready.go:94] pod "kube-apiserver-addons-527125" is "Ready"
	I1008 14:12:59.975679  362613 pod_ready.go:86] duration metric: took 9.201937ms for pod "kube-apiserver-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:12:59.979048  362613 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:13:00.340803  362613 pod_ready.go:94] pod "kube-controller-manager-addons-527125" is "Ready"
	I1008 14:13:00.340841  362613 pod_ready.go:86] duration metric: took 361.753837ms for pod "kube-controller-manager-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:13:00.541328  362613 pod_ready.go:83] waiting for pod "kube-proxy-6vjk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:13:00.941498  362613 pod_ready.go:94] pod "kube-proxy-6vjk6" is "Ready"
	I1008 14:13:00.941541  362613 pod_ready.go:86] duration metric: took 400.162702ms for pod "kube-proxy-6vjk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:13:01.141789  362613 pod_ready.go:83] waiting for pod "kube-scheduler-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:13:01.541397  362613 pod_ready.go:94] pod "kube-scheduler-addons-527125" is "Ready"
	I1008 14:13:01.541430  362613 pod_ready.go:86] duration metric: took 399.609492ms for pod "kube-scheduler-addons-527125" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 14:13:01.541448  362613 pod_ready.go:40] duration metric: took 1.604762596s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 14:13:01.589649  362613 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1008 14:13:01.591618  362613 out.go:179] * Done! kubectl is now configured to use "addons-527125" cluster and "default" namespace by default
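The long run of kapi.go:96 lines above is a label-selector poll: every few hundred milliseconds minikube lists the pods matching one selector, logs the current phase, and repeats until the addon's pods are ready, at which point kapi.go:107 emits the duration metric. Below is a minimal sketch of that pattern, assuming client-go and a kubeconfig at the default path; waitForPods, the namespace, and the interval/timeout values are illustrative, not minikube's actual kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls every interval until all pods matching selector in ns
// are Running, mirroring the "waiting for pod ... current state" lines above.
func waitForPods(cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// Same shape as the log lines above: phase printed each poll.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %q", selector)
		}
		time.Sleep(interval)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace is an assumption; the log above does not name it.
	if err := waitForPods(cs, "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
}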
	
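The gcp-auth notes above describe the opt-out mechanism: a pod carrying the `gcp-auth-skip-secret` label is skipped by the credential-mounting webhook. Below is a hypothetical client-go sketch of creating such a pod; the pod name, image, namespace, and the label value "true" are assumptions here, and only the label key comes from the message above.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical name
			Labels: map[string]string{
				// Label key taken from the log message above; the value
				// "true" is a conventional choice, not prescribed there.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox", // hypothetical image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}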
	
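The CRI-O section below is CRI-O's debug log of CRI gRPC traffic: Version, ImageFsInfo, and filterless ListContainers requests with their responses. Below is a sketch of issuing the same RuntimeService calls directly, assuming the k8s.io/cri-api Go bindings and CRI-O's default socket path.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is CRI-O's default; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same shape as the /runtime.v1.RuntimeService/Version lines below.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (API %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

	// An empty filter returns the full container list, which is why the log
	// below says "No filters were applied, returning full container list".
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}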
	==> CRI-O <==
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.439788827Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=ea653969-b841-4efc-91f3-065dd814ace6 name=/runtime.v1.RuntimeService/ExecSync
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.440061701Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=ea653969-b841-4efc-91f3-065dd814ace6 name=/runtime.v1.RuntimeService/ExecSync
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.445154021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee0c1487-802b-4775-bc6f-36f1b501927c name=/runtime.v1.RuntimeService/Version
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.445311823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee0c1487-802b-4775-bc6f-36f1b501927c name=/runtime.v1.RuntimeService/Version
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.447337333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d86caae8-8e3f-4707-84fe-0aa33ead8588 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.448926507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759932973448894712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d86caae8-8e3f-4707-84fe-0aa33ead8588 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.449491735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14bf7740-d8a9-4446-b070-b8cc60e8679d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.449549999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14bf7740-d8a9-4446-b070-b8cc60e8679d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.450198336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd4f0d75529eefa44460740ea3170f40fd916bb7b99ec24d1aca32585db3715c,PodSandboxId:c8a12936349419f185fd6ff5dcf4d5c54286a6a646928c7f1d81067e87ae2c11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5105ff1d2f8c81a83c335cee9bf466f452e33d7ea58ef0e7065143fb761485ab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a63019652e24443f18e4806cae975591a737588479e88047f8c4e11991819d24,State:CONTAINER_RUNNING,CreatedAt:1759932829073431144,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a70ffabf-82a6-43e5-bbb6-0693c530d883,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f1b5de273397b00ce40ecaef5d3248b4a58063583cffcb8f3d932bcec1b876,PodSandboxId:d74811a5186bf5d1991fa19e9333a06094d44dedf480404929eaaea7bb829664,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759932785790395306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf129346-930b-4f40-8ca4-15fa8630b971,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d151af1118d34737babf23a76fb041f10fb9b6bf1014480770affe28ca5f4bc7,PodSandboxId:2b329c582b203e75c2d625790c71e1621d9a2339278935af0c8d99a78a4d03ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759932775508993306,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-lwpdn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fe3cbed-9633-4590-b9bb-e14a6199fbe8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a1feba9d92fba4ef5b9cb7d70c79729766b95ee46855fd914f36fce36c906d5c,PodSandboxId:c995193ce75786afcf45df2bc874d60287c620432cf897d8edb2ab251fc0012b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759932735344683929,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g8ksq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ec79cba-28ef-4b09-9b1c-c8037cce0dc3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9c986a54eb286cbfa1fcd4f234637ef2d6feee9f464f4b5c33bd459cc7615,PodSandboxId:715d958f1f58e033e472f40a854204e2833fe8ab4fe85b1a06ac5111fdf85dff,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759932734252051233,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hcnc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 781fdc78-4f0f-48da-b68d-3bf20f58b742,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46afc08045bce132666d2976946c5b39a15a8511d703ed37bfedeb60da091c68,PodSandboxId:fc4e2329fa74adbc738d87e56c5e2aaea7deb76d7bd11edc8650457f3277f80e,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1759932704729563633,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fcgl2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 73a321d7-cbc8-4559-a441-943c51fd3a20,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644899e4eb1adf1310ac809aab5d3f50b398e72760dc36049dc8000d5dee1641,PodSandboxId:9710060fb1447c3a63f86fd46de8f7717a92ffba77826abbead473d9cca0f229,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759932673004477820,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84424f19-bc58-4bb1-9395-b41f8b2717e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d4661330a8f5d463a7cee3be0e0f215d3bbe09ac8a0ec4ba97c588080da6b1,PodSandboxId:a728e7291218b955b1bdb7eea6f5e23cead1d4b3d75c4d794ee3b94dc6f501f9,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759932655908888046,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bmcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011bcb5e-8b34-4d09-97eb-08d897d3141c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae565ceabd6bef3b988b6d7a96cff339f5450fef1dda10f994aaddcc645af61d,PodSandboxId:ab46bf4b0bd815972ec0ae1da9237577bc3dfaa4172fb9f4128a51cfaeeaa32c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759932634509820587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2868a1-5ef1-4d01-b26b-49b3ea8370a5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f348635217b15b500c4a0ba5dfffb579df296c95a11b45b022e064c8debb71,PodSandboxId:d37558af1ea172cc5f53490d996548021769c973f8cd8d5f11d0e5c12f069190,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759932626793562633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpmqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65db61e2-0d47-4e9b-b9c2-6ea3a960de4d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae8fc094fe7ab2be46cf484ffb9a767595d43ea2c71a48edc42de4cfd54b9c,PodSandboxId:d209b5508c2d4af570b4ce7322dc173e1c7d17f55c958d9fba9233d036600b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759932626002123639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vjk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f47cb93-a6a7-4a3c-9b32-2ad93ae128b8,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0673a329764cc44b3a910bc0ca0906db8f1c61f6d55cc76a1b9a74731e3d56be,PodSandboxId:bb1a9601e4b2c51574ab1702d68cc3150ff1f690b02b1843719317f390f2fd86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759932614002076305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b365fccc8357a4badcc59de4a88db,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373f68e0a309227a362c7738856634f04053dcbe5fc1a298f0980ceb71395867,PodSandboxId:59787c519bc49526d7a0cbd139173890e5fff73cd6f4de205973de6394633a21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759932613976046127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc858eb29d9f81e70db16cbbe160731,},Annotations:map[string]st
ring{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cccffee35cb20c4304160ba615287376fcd845cef44ad63c7b434bbbbf5f22,PodSandboxId:cf5aa74c2f4cc2649808e936344eee466a04a7ba3cbe29bdd90e411b2ade9354,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759932613984051809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a38f6d3c859f7c25db05cab0ffc1d466,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b9392e083be3060d0a95cc2607a3b8232190a724207ec624af25946d1a24b6,PodSandboxId:e65813c832b81eb443fbffd2df449b014ea9b2470a0a7a20483ea054b5ac72c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759932613923217074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32a7bd3683ae37095704254da0d710b2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14bf7740-d8a9-4446-b070-b8cc60e8679d name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.450562958Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.451395720Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.488048737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3468e26-061c-4854-87f2-5dbc42484923 name=/runtime.v1.RuntimeService/Version
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.488161408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3468e26-061c-4854-87f2-5dbc42484923 name=/runtime.v1.RuntimeService/Version
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.489777596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88d0220a-04e8-4396-8805-78d87af2fad0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.492902650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759932973492865871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88d0220a-04e8-4396-8805-78d87af2fad0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.493540318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b2ae112-b7fe-433a-b716-b12873843949 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.493623719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b2ae112-b7fe-433a-b716-b12873843949 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.493984052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd4f0d75529eefa44460740ea3170f40fd916bb7b99ec24d1aca32585db3715c,PodSandboxId:c8a12936349419f185fd6ff5dcf4d5c54286a6a646928c7f1d81067e87ae2c11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5105ff1d2f8c81a83c335cee9bf466f452e33d7ea58ef0e7065143fb761485ab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a63019652e24443f18e4806cae975591a737588479e88047f8c4e11991819d24,State:CONTAINER_RUNNING,CreatedAt:1759932829073431144,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a70ffabf-82a6-43e5-bbb6-0693c530d883,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f1b5de273397b00ce40ecaef5d3248b4a58063583cffcb8f3d932bcec1b876,PodSandboxId:d74811a5186bf5d1991fa19e9333a06094d44dedf480404929eaaea7bb829664,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759932785790395306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf129346-930b-4f40-8ca4-15fa8630b971,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d151af1118d34737babf23a76fb041f10fb9b6bf1014480770affe28ca5f4bc7,PodSandboxId:2b329c582b203e75c2d625790c71e1621d9a2339278935af0c8d99a78a4d03ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759932775508993306,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-lwpdn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fe3cbed-9633-4590-b9bb-e14a6199fbe8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a1feba9d92fba4ef5b9cb7d70c79729766b95ee46855fd914f36fce36c906d5c,PodSandboxId:c995193ce75786afcf45df2bc874d60287c620432cf897d8edb2ab251fc0012b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759932735344683929,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g8ksq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ec79cba-28ef-4b09-9b1c-c8037cce0dc3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9c986a54eb286cbfa1fcd4f234637ef2d6feee9f464f4b5c33bd459cc7615,PodSandboxId:715d958f1f58e033e472f40a854204e2833fe8ab4fe85b1a06ac5111fdf85dff,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759932734252051233,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hcnc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 781fdc78-4f0f-48da-b68d-3bf20f58b742,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46afc08045bce132666d2976946c5b39a15a8511d703ed37bfedeb60da091c68,PodSandboxId:fc4e2329fa74adbc738d87e56c5e2aaea7deb76d7bd11edc8650457f3277f80e,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1759932704729563633,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fcgl2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 73a321d7-cbc8-4559-a441-943c51fd3a20,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644899e4eb1adf1310ac809aab5d3f50b398e72760dc36049dc8000d5dee1641,PodSandboxId:9710060fb1447c3a63f86fd46de8f7717a92ffba77826abbead473d9cca0f229,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759932673004477820,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84424f19-bc58-4bb1-9395-b41f8b2717e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d4661330a8f5d463a7cee3be0e0f215d3bbe09ac8a0ec4ba97c588080da6b1,PodSandboxId:a728e7291218b955b1bdb7eea6f5e23cead1d4b3d75c4d794ee3b94dc6f501f9,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759932655908888046,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bmcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011bcb5e-8b34-4d09-97eb-08d897d3141c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae565ceabd6bef3b988b6d7a96cff339f5450fef1dda10f994aaddcc645af61d,PodSandboxId:ab46bf4b0bd815972ec0ae1da9237577bc3dfaa4172fb9f4128a51cfaeeaa32c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759932634509820587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2868a1-5ef1-4d01-b26b-49b3ea8370a5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f348635217b15b500c4a0ba5dfffb579df296c95a11b45b022e064c8debb71,PodSandboxId:d37558af1ea172cc5f53490d996548021769c973f8cd8d5f11d0e5c12f069190,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759932626793562633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpmqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65db61e2-0d47-4e9b-b9c2-6ea3a960de4d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae8fc094fe7ab2be46cf484ffb9a767595d43ea2c71a48edc42de4cfd54b9c,PodSandboxId:d209b5508c2d4af570b4ce7322dc173e1c7d17f55c958d9fba9233d036600b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759932626002123639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vjk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f47cb93-a6a7-4a3c-9b32-2ad93ae128b8,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0673a329764cc44b3a910bc0ca0906db8f1c61f6d55cc76a1b9a74731e3d56be,PodSandboxId:bb1a9601e4b2c51574ab1702d68cc3150ff1f690b02b1843719317f390f2fd86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759932614002076305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b365fccc8357a4badcc59de4a88db,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373f68e0a309227a362c7738856634f04053dcbe5fc1a298f0980ceb71395867,PodSandboxId:59787c519bc49526d7a0cbd139173890e5fff73cd6f4de205973de6394633a21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759932613976046127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc858eb29d9f81e70db16cbbe160731,},Annotations:map[string]st
ring{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cccffee35cb20c4304160ba615287376fcd845cef44ad63c7b434bbbbf5f22,PodSandboxId:cf5aa74c2f4cc2649808e936344eee466a04a7ba3cbe29bdd90e411b2ade9354,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759932613984051809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a38f6d3c859f7c25db05cab0ffc1d466,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b9392e083be3060d0a95cc2607a3b8232190a724207ec624af25946d1a24b6,PodSandboxId:e65813c832b81eb443fbffd2df449b014ea9b2470a0a7a20483ea054b5ac72c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759932613923217074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32a7bd3683ae37095704254da0d710b2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b2ae112-b7fe-433a-b716-b12873843949 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.532130764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c9bf3bd-559f-4163-969d-f080afd54652 name=/runtime.v1.RuntimeService/Version
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.532221497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c9bf3bd-559f-4163-969d-f080afd54652 name=/runtime.v1.RuntimeService/Version
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.533509676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9d4c582-9e4b-4001-84ae-e39fdfb48ff1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.534800359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759932973534771562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9d4c582-9e4b-4001-84ae-e39fdfb48ff1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.535596283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=650eabc9-a892-4ec4-8529-064326644915 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.535883804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=650eabc9-a892-4ec4-8529-064326644915 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 14:16:13 addons-527125 crio[815]: time="2025-10-08 14:16:13.536796697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd4f0d75529eefa44460740ea3170f40fd916bb7b99ec24d1aca32585db3715c,PodSandboxId:c8a12936349419f185fd6ff5dcf4d5c54286a6a646928c7f1d81067e87ae2c11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5105ff1d2f8c81a83c335cee9bf466f452e33d7ea58ef0e7065143fb761485ab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a63019652e24443f18e4806cae975591a737588479e88047f8c4e11991819d24,State:CONTAINER_RUNNING,CreatedAt:1759932829073431144,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a70ffabf-82a6-43e5-bbb6-0693c530d883,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f1b5de273397b00ce40ecaef5d3248b4a58063583cffcb8f3d932bcec1b876,PodSandboxId:d74811a5186bf5d1991fa19e9333a06094d44dedf480404929eaaea7bb829664,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759932785790395306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf129346-930b-4f40-8ca4-15fa8630b971,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d151af1118d34737babf23a76fb041f10fb9b6bf1014480770affe28ca5f4bc7,PodSandboxId:2b329c582b203e75c2d625790c71e1621d9a2339278935af0c8d99a78a4d03ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759932775508993306,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-lwpdn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1fe3cbed-9633-4590-b9bb-e14a6199fbe8,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a1feba9d92fba4ef5b9cb7d70c79729766b95ee46855fd914f36fce36c906d5c,PodSandboxId:c995193ce75786afcf45df2bc874d60287c620432cf897d8edb2ab251fc0012b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759932735344683929,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g8ksq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4ec79cba-28ef-4b09-9b1c-c8037cce0dc3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f9c986a54eb286cbfa1fcd4f234637ef2d6feee9f464f4b5c33bd459cc7615,PodSandboxId:715d958f1f58e033e472f40a854204e2833fe8ab4fe85b1a06ac5111fdf85dff,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759932734252051233,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hcnc5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 781fdc78-4f0f-48da-b68d-3bf20f58b742,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46afc08045bce132666d2976946c5b39a15a8511d703ed37bfedeb60da091c68,PodSandboxId:fc4e2329fa74adbc738d87e56c5e2aaea7deb76d7bd11edc8650457f3277f80e,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1759932704729563633,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-fcgl2,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 73a321d7-cbc8-4559-a441-943c51fd3a20,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644899e4eb1adf1310ac809aab5d3f50b398e72760dc36049dc8000d5dee1641,PodSandboxId:9710060fb1447c3a63f86fd46de8f7717a92ffba77826abbead473d9cca0f229,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759932673004477820,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84424f19-bc58-4bb1-9395-b41f8b2717e2,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d4661330a8f5d463a7cee3be0e0f215d3bbe09ac8a0ec4ba97c588080da6b1,PodSandboxId:a728e7291218b955b1bdb7eea6f5e23cead1d4b3d75c4d794ee3b94dc6f501f9,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759932655908888046,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6bmcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011bcb5e-8b34-4d09-97eb-08d897d3141c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae565ceabd6bef3b988b6d7a96cff339f5450fef1dda10f994aaddcc645af61d,PodSandboxId:ab46bf4b0bd815972ec0ae1da9237577bc3dfaa4172fb9f4128a51cfaeeaa32c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759932634509820587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a2868a1-5ef1-4d01-b26b-49b3ea8370a5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f348635217b15b500c4a0ba5dfffb579df296c95a11b45b022e064c8debb71,PodSandboxId:d37558af1ea172cc5f53490d996548021769c973f8cd8d5f11d0e5c12f069190,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759932626793562633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-wpmqj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65db61e2-0d47-4e9b-b9c2-6ea3a960de4d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ae8fc094fe7ab2be46cf484ffb9a767595d43ea2c71a48edc42de4cfd54b9c,PodSandboxId:d209b5508c2d4af570b4ce7322dc173e1c7d17f55c958d9fba9233d036600b3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759932626002123639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vjk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f47cb93-a6a7-4a3c-9b32-2ad93ae128b8,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0673a329764cc44b3a910bc0ca0906db8f1c61f6d55cc76a1b9a74731e3d56be,PodSandboxId:bb1a9601e4b2c51574ab1702d68cc3150ff1f690b02b1843719317f390f2fd86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759932614002076305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 801b365fccc8357a4badcc59de4a88db,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373f68e0a309227a362c7738856634f04053dcbe5fc1a298f0980ceb71395867,PodSandboxId:59787c519bc49526d7a0cbd139173890e5fff73cd6f4de205973de6394633a21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759932613976046127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc858eb29d9f81e70db16cbbe160731,},Annotations:map[string]st
ring{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cccffee35cb20c4304160ba615287376fcd845cef44ad63c7b434bbbbf5f22,PodSandboxId:cf5aa74c2f4cc2649808e936344eee466a04a7ba3cbe29bdd90e411b2ade9354,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759932613984051809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a38f6d3c859f7c25db05cab0ffc1d466,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b9392e083be3060d0a95cc2607a3b8232190a724207ec624af25946d1a24b6,PodSandboxId:e65813c832b81eb443fbffd2df449b014ea9b2470a0a7a20483ea054b5ac72c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759932613923217074,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler
,io.kubernetes.pod.name: kube-scheduler-addons-527125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32a7bd3683ae37095704254da0d710b2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=650eabc9-a892-4ec4-8529-064326644915 name=/runtime.v1.RuntimeService/ListContainers
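Note on the crio debug log above: the two near-identical ListContainers dumps (ids 3b2ae112 and 650eabc9) are consecutive CRI polls, not a copy-paste artifact; crio serializes the full container inventory on every call. The same inventory reads far more compactly on the node itself. A minimal sketch, assuming the profile name from this run and the crictl binary shipped in the minikube guest:

	$ out/minikube-linux-amd64 -p addons-527125 ssh -- sudo crictl ps -a

This prints roughly the table reproduced in the next section.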
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd4f0d75529ee       docker.io/library/nginx@sha256:5105ff1d2f8c81a83c335cee9bf466f452e33d7ea58ef0e7065143fb761485ab                              2 minutes ago       Running             nginx                     0                   c8a1293634941       nginx
	84f1b5de27339       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   d74811a5186bf       busybox
	d151af1118d34       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   2b329c582b203       ingress-nginx-controller-9cc49f96f-lwpdn
	a1feba9d92fba       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago       Exited              patch                     1                   c995193ce7578       ingress-nginx-admission-patch-g8ksq
	35f9c986a54eb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   715d958f1f58e       ingress-nginx-admission-create-hcnc5
	46afc08045bce       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   fc4e2329fa74a       gadget-fcgl2
	644899e4eb1ad       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago       Running             minikube-ingress-dns      0                   9710060fb1447       kube-ingress-dns-minikube
	96d4661330a8f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   a728e7291218b       amd-gpu-device-plugin-6bmcm
	ae565ceabd6be       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   ab46bf4b0bd81       storage-provisioner
	74f348635217b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   d37558af1ea17       coredns-66bc5c9577-wpmqj
	a7ae8fc094fe7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   d209b5508c2d4       kube-proxy-6vjk6
	0673a329764cc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   bb1a9601e4b2c       kube-controller-manager-addons-527125
	01cccffee35cb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   cf5aa74c2f4cc       etcd-addons-527125
	373f68e0a3092       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   59787c519bc49       kube-apiserver-addons-527125
	12b9392e083be       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   e65813c832b81       kube-scheduler-addons-527125
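The two Exited entries above are expected: ingress-nginx provisions its admission webhook certificates with one-shot create/patch Jobs that exit once the secret is in place (the patch container's ATTEMPT of 1 reflects a single retry). A quick way to confirm they completed rather than crashed, assuming the standard addon layout:

	$ kubectl --context addons-527125 -n ingress-nginx get jobs
	$ kubectl --context addons-527125 -n ingress-nginx logs job/ingress-nginx-admission-patch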
	
	
	==> coredns [74f348635217b15b500c4a0ba5dfffb579df296c95a11b45b022e064c8debb71] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:47978 - 18447 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000406742s
	[INFO] 10.244.0.23:54991 - 4083 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000943265s
	[INFO] 10.244.0.23:49310 - 24677 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155748s
	[INFO] 10.244.0.23:58184 - 51186 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00051244s
	[INFO] 10.244.0.23:50666 - 13185 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000640683s
	[INFO] 10.244.0.23:59229 - 13929 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007224s
	[INFO] 10.244.0.23:53583 - 59064 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001269559s
	[INFO] 10.244.0.23:32948 - 11383 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.004495224s
	[INFO] 10.244.0.26:46923 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00068057s
	[INFO] 10.244.0.26:38870 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000204596s
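The "dial tcp 10.96.0.1:443: i/o timeout" lines appear to be from CoreDNS startup, before kube-proxy had programmed the service VIP for the kubernetes Service (kube-proxy's own log below shows it syncing at 14:10:26); the reload then succeeds, and the later query log shows normal search-path expansion (NXDOMAIN for the cluster.local suffixes, NOERROR upstream). To re-check in-cluster resolution by hand, a sketch using the busybox image already pulled in this run (the pod name is illustrative):

	$ kubectl --context addons-527125 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup storage.googleapis.com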
	
	
	==> describe nodes <==
	Name:               addons-527125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-527125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=addons-527125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T14_10_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-527125
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 14:10:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-527125
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 14:16:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 14:14:25 +0000   Wed, 08 Oct 2025 14:10:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 14:14:25 +0000   Wed, 08 Oct 2025 14:10:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 14:14:25 +0000   Wed, 08 Oct 2025 14:10:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 14:14:25 +0000   Wed, 08 Oct 2025 14:10:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    addons-527125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 32f9250397884783924a0b79d559c3f2
	  System UUID:                32f92503-9788-4783-924a-0b79d559c3f2
	  Boot ID:                    b26e0970-30b0-4762-8b28-c17313967cc4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     hello-world-app-5d498dc89-jxzh5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gadget                      gadget-fcgl2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-lwpdn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m40s
	  kube-system                 amd-gpu-device-plugin-6bmcm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 coredns-66bc5c9577-wpmqj                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m48s
	  kube-system                 etcd-addons-527125                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m55s
	  kube-system                 kube-apiserver-addons-527125                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-addons-527125       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-6vjk6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-addons-527125                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
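Sanity check on those totals: the CPU request column above sums to 100m (ingress-nginx controller) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m / 2000m on this 2-CPU node is 42.5%, shown truncated as 42%. Memory likewise: 90Mi + 70Mi + 100Mi = 260Mi requested, with the single 170Mi limit coming from coredns.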
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 5m46s              kube-proxy       
	  Normal  Starting                 6m1s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m1s)  kubelet          Node addons-527125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m1s)  kubelet          Node addons-527125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m1s)  kubelet          Node addons-527125 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m54s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m53s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s              kubelet          Node addons-527125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s              kubelet          Node addons-527125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s              kubelet          Node addons-527125 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m52s              kubelet          Node addons-527125 status is now: NodeReady
	  Normal  RegisteredNode           5m49s              node-controller  Node addons-527125 event: Registered Node addons-527125 in Controller
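One oddity in the events: "Starting kubelet" appears twice (6m1s and 5m54s ago), with the NodeHasSufficient* conditions re-announced after each. This is consistent with the kubelet being restarted during kubeadm bring-up; Ready follows a second later. The same stream can be re-pulled with a field selector, e.g.:

	$ kubectl --context addons-527125 get events -A --field-selector involvedObject.name=addons-527125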
	
	
	==> dmesg <==
	[  +5.235005] kauditd_printk_skb: 11 callbacks suppressed
	[  +2.106328] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.725077] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.638748] kauditd_printk_skb: 26 callbacks suppressed
	[ +11.359305] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 8 14:12] kauditd_printk_skb: 11 callbacks suppressed
	[  +2.350804] kauditd_printk_skb: 106 callbacks suppressed
	[  +0.987563] kauditd_printk_skb: 119 callbacks suppressed
	[  +3.689008] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.342567] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 8 14:13] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.276874] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.127490] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.025660] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000564] kauditd_printk_skb: 63 callbacks suppressed
	[  +3.492949] kauditd_printk_skb: 135 callbacks suppressed
	[  +2.937297] kauditd_printk_skb: 101 callbacks suppressed
	[  +1.432532] kauditd_printk_skb: 88 callbacks suppressed
	[  +2.606676] kauditd_printk_skb: 87 callbacks suppressed
	[Oct 8 14:14] kauditd_printk_skb: 145 callbacks suppressed
	[  +0.390011] kauditd_printk_skb: 10 callbacks suppressed
	[ +19.579947] kauditd_printk_skb: 109 callbacks suppressed
	[Oct 8 14:16] kauditd_printk_skb: 10 callbacks suppressed
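The dmesg excerpt is dominated by "kauditd_printk_skb: N callbacks suppressed", which means the kernel rate-limited audit records destined for printk; it is audit-subsystem noise, not an error. A quick measure of how pervasive it is (the pipe runs on the host against the streamed output):

	$ out/minikube-linux-amd64 -p addons-527125 ssh -- sudo dmesg | grep -c 'callbacks suppressed'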
	
	
	==> etcd [01cccffee35cb20c4304160ba615287376fcd845cef44ad63c7b434bbbbf5f22] <==
	{"level":"info","ts":"2025-10-08T14:12:18.062962Z","caller":"traceutil/trace.go:172","msg":"trace[1932243925] range","detail":"{range_begin:/registry/events/kube-system/amd-gpu-device-plugin-6bmcm.186c8966b1477e89; range_end:; response_count:1; response_revision:1119; }","duration":"137.555046ms","start":"2025-10-08T14:12:17.925401Z","end":"2025-10-08T14:12:18.062956Z","steps":["trace[1932243925] 'agreement among raft nodes before linearized reading'  (duration: 137.478672ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:22.306820Z","caller":"traceutil/trace.go:172","msg":"trace[1694946395] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1203; }","duration":"166.865126ms","start":"2025-10-08T14:12:22.139929Z","end":"2025-10-08T14:12:22.306794Z","steps":["trace[1694946395] 'read index received'  (duration: 166.859182ms)","trace[1694946395] 'applied index is now lower than readState.Index'  (duration: 4.292µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T14:12:22.306921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.973385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-08T14:12:22.306940Z","caller":"traceutil/trace.go:172","msg":"trace[456792196] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1161; }","duration":"167.007338ms","start":"2025-10-08T14:12:22.139925Z","end":"2025-10-08T14:12:22.306933Z","steps":["trace[456792196] 'agreement among raft nodes before linearized reading'  (duration: 166.942712ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:22.306983Z","caller":"traceutil/trace.go:172","msg":"trace[1277793610] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"211.672891ms","start":"2025-10-08T14:12:22.095300Z","end":"2025-10-08T14:12:22.306972Z","steps":["trace[1277793610] 'process raft request'  (duration: 211.564105ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:43.720188Z","caller":"traceutil/trace.go:172","msg":"trace[721499335] linearizableReadLoop","detail":"{readStateIndex:1282; appliedIndex:1282; }","duration":"239.806833ms","start":"2025-10-08T14:12:43.480362Z","end":"2025-10-08T14:12:43.720169Z","steps":["trace[721499335] 'read index received'  (duration: 239.798781ms)","trace[721499335] 'applied index is now lower than readState.Index'  (duration: 6.917µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T14:12:43.720390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.061007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-08T14:12:43.720410Z","caller":"traceutil/trace.go:172","msg":"trace[699473661] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1236; }","duration":"240.110955ms","start":"2025-10-08T14:12:43.480293Z","end":"2025-10-08T14:12:43.720404Z","steps":["trace[699473661] 'agreement among raft nodes before linearized reading'  (duration: 240.029164ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:43.720970Z","caller":"traceutil/trace.go:172","msg":"trace[452448184] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"271.542275ms","start":"2025-10-08T14:12:43.449419Z","end":"2025-10-08T14:12:43.720961Z","steps":["trace[452448184] 'process raft request'  (duration: 271.341932ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-08T14:12:43.724283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.922788ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-08T14:12:43.724366Z","caller":"traceutil/trace.go:172","msg":"trace[868520465] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1237; }","duration":"108.018785ms","start":"2025-10-08T14:12:43.616337Z","end":"2025-10-08T14:12:43.724356Z","steps":["trace[868520465] 'agreement among raft nodes before linearized reading'  (duration: 107.900225ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:43.724427Z","caller":"traceutil/trace.go:172","msg":"trace[1454255670] transaction","detail":"{read_only:false; response_revision:1238; number_of_response:1; }","duration":"152.820412ms","start":"2025-10-08T14:12:43.571596Z","end":"2025-10-08T14:12:43.724417Z","steps":["trace[1454255670] 'process raft request'  (duration: 152.743303ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:53.387676Z","caller":"traceutil/trace.go:172","msg":"trace[1560503836] linearizableReadLoop","detail":"{readStateIndex:1297; appliedIndex:1297; }","duration":"190.496119ms","start":"2025-10-08T14:12:53.197148Z","end":"2025-10-08T14:12:53.387644Z","steps":["trace[1560503836] 'read index received'  (duration: 190.488776ms)","trace[1560503836] 'applied index is now lower than readState.Index'  (duration: 6.415µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T14:12:53.387867Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.690228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-08T14:12:53.387865Z","caller":"traceutil/trace.go:172","msg":"trace[44192862] transaction","detail":"{read_only:false; response_revision:1250; number_of_response:1; }","duration":"253.791078ms","start":"2025-10-08T14:12:53.134057Z","end":"2025-10-08T14:12:53.387848Z","steps":["trace[44192862] 'process raft request'  (duration: 253.622062ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:53.387896Z","caller":"traceutil/trace.go:172","msg":"trace[1467363097] range","detail":"{range_begin:/registry/persistentvolumes; range_end:; response_count:0; response_revision:1249; }","duration":"190.752454ms","start":"2025-10-08T14:12:53.197129Z","end":"2025-10-08T14:12:53.387882Z","steps":["trace[1467363097] 'agreement among raft nodes before linearized reading'  (duration: 190.665244ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:54.778916Z","caller":"traceutil/trace.go:172","msg":"trace[379866371] linearizableReadLoop","detail":"{readStateIndex:1305; appliedIndex:1305; }","duration":"163.131196ms","start":"2025-10-08T14:12:54.615766Z","end":"2025-10-08T14:12:54.778898Z","steps":["trace[379866371] 'read index received'  (duration: 163.125432ms)","trace[379866371] 'applied index is now lower than readState.Index'  (duration: 4.692µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T14:12:54.779017Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.30414ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-08T14:12:54.779037Z","caller":"traceutil/trace.go:172","msg":"trace[604687684] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1256; }","duration":"163.341602ms","start":"2025-10-08T14:12:54.615690Z","end":"2025-10-08T14:12:54.779031Z","steps":["trace[604687684] 'agreement among raft nodes before linearized reading'  (duration: 163.282054ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:12:54.779163Z","caller":"traceutil/trace.go:172","msg":"trace[793000520] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"265.249397ms","start":"2025-10-08T14:12:54.513876Z","end":"2025-10-08T14:12:54.779126Z","steps":["trace[793000520] 'process raft request'  (duration: 265.112609ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:13:24.598533Z","caller":"traceutil/trace.go:172","msg":"trace[1659933056] transaction","detail":"{read_only:false; response_revision:1437; number_of_response:1; }","duration":"218.432063ms","start":"2025-10-08T14:13:24.380082Z","end":"2025-10-08T14:13:24.598514Z","steps":["trace[1659933056] 'process raft request'  (duration: 218.307827ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:13:26.949781Z","caller":"traceutil/trace.go:172","msg":"trace[818422244] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1440; }","duration":"166.74217ms","start":"2025-10-08T14:13:26.783001Z","end":"2025-10-08T14:13:26.949743Z","steps":["trace[818422244] 'process raft request'  (duration: 166.612993ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T14:13:32.610424Z","caller":"traceutil/trace.go:172","msg":"trace[454854483] transaction","detail":"{read_only:false; response_revision:1497; number_of_response:1; }","duration":"341.503251ms","start":"2025-10-08T14:13:32.268907Z","end":"2025-10-08T14:13:32.610411Z","steps":["trace[454854483] 'process raft request'  (duration: 341.406912ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-08T14:13:32.610588Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-08T14:13:32.268891Z","time spent":"341.598111ms","remote":"127.0.0.1:46084","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1318,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1274 >> failure:<>"}
	{"level":"info","ts":"2025-10-08T14:13:43.528167Z","caller":"traceutil/trace.go:172","msg":"trace[852969232] transaction","detail":"{read_only:false; response_revision:1622; number_of_response:1; }","duration":"184.090498ms","start":"2025-10-08T14:13:43.344063Z","end":"2025-10-08T14:13:43.528154Z","steps":["trace[852969232] 'process raft request'  (duration: 184.014497ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:16:13 up 6 min,  0 users,  load average: 0.59, 1.41, 0.79
	Linux addons-527125 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [373f68e0a309227a362c7738856634f04053dcbe5fc1a298f0980ceb71395867] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 14:11:26.741808       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1008 14:13:11.427172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.51:8443->192.168.39.1:53338: use of closed network connection
	E1008 14:13:11.624131       1 conn.go:339] Error on socket receive: read tcp 192.168.39.51:8443->192.168.39.1:45402: use of closed network connection
	I1008 14:13:21.008577       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.232.139"}
	I1008 14:13:27.727242       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1008 14:13:39.625065       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1008 14:13:39.896503       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.107.148"}
	I1008 14:13:52.080672       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1008 14:14:09.487476       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 14:14:09.487617       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 14:14:09.522609       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 14:14:09.523024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 14:14:09.532224       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 14:14:09.532400       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 14:14:09.563362       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 14:14:09.563471       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1008 14:14:09.599621       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1008 14:14:09.599669       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1008 14:14:10.532890       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1008 14:14:10.600025       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1008 14:14:10.728379       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1008 14:14:16.250288       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1008 14:16:12.165001       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.185.173"}
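Two things stand out in the apiserver log: the snapshot.storage.k8s.io group versions are registered and then, at 14:14:10, all watchers on the volumesnapshot resources are terminated (the CSI snapshot CRDs presumably removed by a parallel test's teardown), and the very last line allocates a ClusterIP for default/hello-world-app at 14:16:12, matching the 1s-old hello-world-app pod in the node description above; the post-mortem simply caught the next test step mid-flight. To see where that service landed:

	$ kubectl --context addons-527125 get svc hello-world-app -o wide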
	
	
	==> kube-controller-manager [0673a329764cc44b3a910bc0ca0906db8f1c61f6d55cc76a1b9a74731e3d56be] <==
	I1008 14:14:24.236739       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 14:14:24.258157       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1008 14:14:24.258211       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1008 14:14:26.401896       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:14:26.403226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:14:29.255145       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:14:29.256308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:14:30.908005       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:14:30.909090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:14:44.456477       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:14:44.457929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:14:48.039116       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:14:48.040137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:14:48.990542       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:14:48.993209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:15:18.596035       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:15:18.597044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:15:32.823495       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:15:32.824669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:15:36.506646       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:15:36.507806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:15:49.445648       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:15:49.446853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1008 14:16:11.288301       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1008 14:16:11.289666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a7ae8fc094fe7ab2be46cf484ffb9a767595d43ea2c71a48edc42de4cfd54b9c] <==
	I1008 14:10:26.746918       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 14:10:26.949831       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 14:10:26.949865       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.51"]
	E1008 14:10:26.949945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 14:10:27.221575       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1008 14:10:27.221923       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 14:10:27.222877       1 server_linux.go:132] "Using iptables Proxier"
	I1008 14:10:27.246931       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 14:10:27.247219       1 server.go:527] "Version info" version="v1.34.1"
	I1008 14:10:27.247230       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 14:10:27.265426       1 config.go:200] "Starting service config controller"
	I1008 14:10:27.273192       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 14:10:27.268477       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 14:10:27.276824       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 14:10:27.270011       1 config.go:309] "Starting node config controller"
	I1008 14:10:27.268467       1 config.go:106] "Starting endpoint slice config controller"
	I1008 14:10:27.276884       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 14:10:27.276897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 14:10:27.276901       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 14:10:27.375889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 14:10:27.377075       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 14:10:27.377238       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [12b9392e083be3060d0a95cc2607a3b8232190a724207ec624af25946d1a24b6] <==
	E1008 14:10:17.157558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 14:10:17.158217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 14:10:17.159536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 14:10:17.160581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 14:10:17.160628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 14:10:17.160670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 14:10:17.160751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 14:10:17.160798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 14:10:17.164064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 14:10:17.164125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 14:10:17.164168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 14:10:17.164249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 14:10:17.164295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 14:10:18.002385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 14:10:18.017431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 14:10:18.091481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 14:10:18.142509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 14:10:18.160565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 14:10:18.174936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 14:10:18.179073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 14:10:18.179826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 14:10:18.265002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 14:10:18.266259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 14:10:18.576236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1008 14:10:21.241155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 14:14:31 addons-527125 kubelet[1496]: I1008 14:14:31.954686    1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf5931d243a65de52dfd1cde9bd74d5fb4cb0598bae4be1c16839881d4413ea7"} err="failed to get container status \"cf5931d243a65de52dfd1cde9bd74d5fb4cb0598bae4be1c16839881d4413ea7\": rpc error: code = NotFound desc = could not find container \"cf5931d243a65de52dfd1cde9bd74d5fb4cb0598bae4be1c16839881d4413ea7\": container with ID starting with cf5931d243a65de52dfd1cde9bd74d5fb4cb0598bae4be1c16839881d4413ea7 not found: ID does not exist"
	Oct 08 14:14:33 addons-527125 kubelet[1496]: I1008 14:14:33.922294    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6bmcm" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 14:14:40 addons-527125 kubelet[1496]: E1008 14:14:40.469625    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932880469084356  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:14:40 addons-527125 kubelet[1496]: E1008 14:14:40.469676    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932880469084356  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:14:50 addons-527125 kubelet[1496]: E1008 14:14:50.473367    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932890472870360  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:14:50 addons-527125 kubelet[1496]: E1008 14:14:50.473425    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932890472870360  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:00 addons-527125 kubelet[1496]: E1008 14:15:00.476745    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932900476333171  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:00 addons-527125 kubelet[1496]: E1008 14:15:00.476772    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932900476333171  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:10 addons-527125 kubelet[1496]: E1008 14:15:10.479902    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932910479456070  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:10 addons-527125 kubelet[1496]: E1008 14:15:10.479946    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932910479456070  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:12 addons-527125 kubelet[1496]: I1008 14:15:12.921478    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 14:15:20 addons-527125 kubelet[1496]: E1008 14:15:20.482957    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932920482319981  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:20 addons-527125 kubelet[1496]: E1008 14:15:20.482985    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932920482319981  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:30 addons-527125 kubelet[1496]: E1008 14:15:30.487039    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932930486307587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:30 addons-527125 kubelet[1496]: E1008 14:15:30.487079    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932930486307587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:40 addons-527125 kubelet[1496]: E1008 14:15:40.490975    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932940490420295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:40 addons-527125 kubelet[1496]: E1008 14:15:40.491034    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932940490420295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:43 addons-527125 kubelet[1496]: I1008 14:15:43.921128    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6bmcm" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 14:15:50 addons-527125 kubelet[1496]: E1008 14:15:50.494045    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932950493530389  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:15:50 addons-527125 kubelet[1496]: E1008 14:15:50.494089    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932950493530389  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:16:00 addons-527125 kubelet[1496]: E1008 14:16:00.498691    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932960498172671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:16:00 addons-527125 kubelet[1496]: E1008 14:16:00.498781    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932960498172671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:16:10 addons-527125 kubelet[1496]: E1008 14:16:10.503550    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759932970502835875  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:16:10 addons-527125 kubelet[1496]: E1008 14:16:10.503652    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759932970502835875  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 08 14:16:12 addons-527125 kubelet[1496]: I1008 14:16:12.217397    1496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2zx\" (UniqueName: \"kubernetes.io/projected/63b222c9-ca4a-44a3-908a-17947fe2c78e-kube-api-access-4d2zx\") pod \"hello-world-app-5d498dc89-jxzh5\" (UID: \"63b222c9-ca4a-44a3-908a-17947fe2c78e\") " pod="default/hello-world-app-5d498dc89-jxzh5"
	
	
	==> storage-provisioner [ae565ceabd6bef3b988b6d7a96cff339f5450fef1dda10f994aaddcc645af61d] <==
	W1008 14:15:48.327165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:50.331297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:50.337072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:52.341195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:52.346152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:54.349859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:54.355467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:56.359354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:56.365043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:58.368497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:15:58.375157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:00.379358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:00.386728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:02.390457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:02.396280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:04.400121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:04.406567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:06.410529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:06.416619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:08.419905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:08.425208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:10.429796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:10.440458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:12.449401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 14:16:12.461676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
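The repeated storage-provisioner warnings in the dump above flag a client that still reads the core/v1 Endpoints API, which the log itself notes is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. As a hedged aside (this command is not part of the test run; the context name is taken from this report), the replacement objects on a live cluster can be listed with:

	kubectl --context addons-527125 get endpointslices.discovery.k8s.io -A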
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-527125 -n addons-527125
helpers_test.go:269: (dbg) Run:  kubectl --context addons-527125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-jxzh5 ingress-nginx-admission-create-hcnc5 ingress-nginx-admission-patch-g8ksq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-527125 describe pod hello-world-app-5d498dc89-jxzh5 ingress-nginx-admission-create-hcnc5 ingress-nginx-admission-patch-g8ksq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-527125 describe pod hello-world-app-5d498dc89-jxzh5 ingress-nginx-admission-create-hcnc5 ingress-nginx-admission-patch-g8ksq: exit status 1 (88.881027ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-jxzh5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-527125/192.168.39.51
	Start Time:       Wed, 08 Oct 2025 14:16:12 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4d2zx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4d2zx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-jxzh5 to addons-527125
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hcnc5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g8ksq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-527125 describe pod hello-world-app-5d498dc89-jxzh5 ingress-nginx-admission-create-hcnc5 ingress-nginx-admission-patch-g8ksq: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable ingress-dns --alsologtostderr -v=1: (1.57275942s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable ingress --alsologtostderr -v=1: (7.808117102s)
--- FAIL: TestAddons/parallel/Ingress (164.93s)
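For manual triage, the same post-mortem queries the harness ran (helpers_test.go:269 and :285 above) can be issued directly; the context and pod names below are copied verbatim from this report:

	# List pods not yet Running, as the harness did:
	kubectl --context addons-527125 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running

	# Inspect the pod that was still ContainerCreating when logs were captured:
	kubectl --context addons-527125 describe pod hello-world-app-5d498dc89-jxzh5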

                                                
                                    

TestPreload (162.98s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105038 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1008 15:03:02.310336  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105038 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m35.203491194s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105038 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105038 image pull gcr.io/k8s-minikube/busybox: (3.415620402s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-105038
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-105038: (7.212714048s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105038 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105038 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.974569875s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105038 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-08 15:04:17.529126532 +0000 UTC m=+3311.538224856
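The regression can be replayed with the same command sequence the test drives (flags copied verbatim from the preload_test.go steps above); the busybox image pulled before the stop is expected to survive the restart and appear in the final image list:

	out/minikube-linux-amd64 start -p test-preload-105038 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-105038 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-105038
	out/minikube-linux-amd64 start -p test-preload-105038 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
	out/minikube-linux-amd64 -p test-preload-105038 image list    # gcr.io/k8s-minikube/busybox should be listed here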
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-105038 -n test-preload-105038
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105038 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105038 logs -n 25: (1.185732492s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-454917 ssh -n multinode-454917-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ ssh     │ multinode-454917 ssh -n multinode-454917 sudo cat /home/docker/cp-test_multinode-454917-m03_multinode-454917.txt                                                                    │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ cp      │ multinode-454917 cp multinode-454917-m03:/home/docker/cp-test.txt multinode-454917-m02:/home/docker/cp-test_multinode-454917-m03_multinode-454917-m02.txt                           │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ ssh     │ multinode-454917 ssh -n multinode-454917-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ ssh     │ multinode-454917 ssh -n multinode-454917-m02 sudo cat /home/docker/cp-test_multinode-454917-m03_multinode-454917-m02.txt                                                            │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ node    │ multinode-454917 node stop m03                                                                                                                                                      │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ node    │ multinode-454917 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:50 UTC │
	│ node    │ list -p multinode-454917                                                                                                                                                            │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │                     │
	│ stop    │ -p multinode-454917                                                                                                                                                                 │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:50 UTC │ 08 Oct 25 14:53 UTC │
	│ start   │ -p multinode-454917 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:53 UTC │ 08 Oct 25 14:55 UTC │
	│ node    │ list -p multinode-454917                                                                                                                                                            │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:55 UTC │                     │
	│ node    │ multinode-454917 node delete m03                                                                                                                                                    │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:55 UTC │ 08 Oct 25 14:56 UTC │
	│ stop    │ multinode-454917 stop                                                                                                                                                               │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:56 UTC │ 08 Oct 25 14:58 UTC │
	│ start   │ -p multinode-454917 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 14:58 UTC │ 08 Oct 25 15:00 UTC │
	│ node    │ list -p multinode-454917                                                                                                                                                            │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 15:00 UTC │                     │
	│ start   │ -p multinode-454917-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-454917-m02 │ jenkins │ v1.37.0 │ 08 Oct 25 15:00 UTC │                     │
	│ start   │ -p multinode-454917-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-454917-m03 │ jenkins │ v1.37.0 │ 08 Oct 25 15:00 UTC │ 08 Oct 25 15:01 UTC │
	│ node    │ add -p multinode-454917                                                                                                                                                             │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 15:01 UTC │                     │
	│ delete  │ -p multinode-454917-m03                                                                                                                                                             │ multinode-454917-m03 │ jenkins │ v1.37.0 │ 08 Oct 25 15:01 UTC │ 08 Oct 25 15:01 UTC │
	│ delete  │ -p multinode-454917                                                                                                                                                                 │ multinode-454917     │ jenkins │ v1.37.0 │ 08 Oct 25 15:01 UTC │ 08 Oct 25 15:01 UTC │
	│ start   │ -p test-preload-105038 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-105038  │ jenkins │ v1.37.0 │ 08 Oct 25 15:01 UTC │ 08 Oct 25 15:03 UTC │
	│ image   │ test-preload-105038 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-105038  │ jenkins │ v1.37.0 │ 08 Oct 25 15:03 UTC │ 08 Oct 25 15:03 UTC │
	│ stop    │ -p test-preload-105038                                                                                                                                                              │ test-preload-105038  │ jenkins │ v1.37.0 │ 08 Oct 25 15:03 UTC │ 08 Oct 25 15:03 UTC │
	│ start   │ -p test-preload-105038 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-105038  │ jenkins │ v1.37.0 │ 08 Oct 25 15:03 UTC │ 08 Oct 25 15:04 UTC │
	│ image   │ test-preload-105038 image list                                                                                                                                                      │ test-preload-105038  │ jenkins │ v1.37.0 │ 08 Oct 25 15:04 UTC │ 08 Oct 25 15:04 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:03:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:03:23.378508  392791 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:03:23.378771  392791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:03:23.378779  392791 out.go:374] Setting ErrFile to fd 2...
	I1008 15:03:23.378783  392791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:03:23.379032  392791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 15:03:23.379507  392791 out.go:368] Setting JSON to false
	I1008 15:03:23.380431  392791 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6335,"bootTime":1759929468,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:03:23.380528  392791 start.go:141] virtualization: kvm guest
	I1008 15:03:23.382539  392791 out.go:179] * [test-preload-105038] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:03:23.383913  392791 notify.go:220] Checking for updates...
	I1008 15:03:23.383935  392791 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:03:23.385135  392791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:03:23.386349  392791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 15:03:23.387645  392791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 15:03:23.388778  392791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:03:23.389850  392791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:03:23.391223  392791 config.go:182] Loaded profile config "test-preload-105038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1008 15:03:23.391694  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:03:23.391775  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:03:23.409979  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I1008 15:03:23.410525  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:03:23.411085  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:03:23.411108  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:03:23.411540  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:03:23.411747  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:23.413548  392791 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1008 15:03:23.414842  392791 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:03:23.415184  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:03:23.415233  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:03:23.429534  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36849
	I1008 15:03:23.430069  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:03:23.430565  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:03:23.430590  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:03:23.430988  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:03:23.431203  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:23.465804  392791 out.go:179] * Using the kvm2 driver based on existing profile
	I1008 15:03:23.466763  392791 start.go:305] selected driver: kvm2
	I1008 15:03:23.466777  392791 start.go:925] validating driver "kvm2" against &{Name:test-preload-105038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:03:23.466880  392791 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:03:23.467566  392791 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:03:23.467650  392791 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 15:03:23.481978  392791 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 15:03:23.482023  392791 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 15:03:23.496913  392791 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 15:03:23.497272  392791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:03:23.497301  392791 cni.go:84] Creating CNI manager for ""
	I1008 15:03:23.497342  392791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 15:03:23.497434  392791 start.go:349] cluster config:
	{Name:test-preload-105038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:03:23.497553  392791 iso.go:125] acquiring lock: {Name:mkaa45da6237a5a16f5f1d676ea2e57ba969b9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:03:23.499192  392791 out.go:179] * Starting "test-preload-105038" primary control-plane node in "test-preload-105038" cluster
	I1008 15:03:23.500166  392791 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1008 15:03:23.888835  392791 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1008 15:03:23.888873  392791 cache.go:58] Caching tarball of preloaded images
	I1008 15:03:23.889073  392791 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1008 15:03:23.890513  392791 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1008 15:03:23.891515  392791 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1008 15:03:23.996762  392791 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1008 15:03:23.996824  392791 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1008 15:03:33.342138  392791 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1008 15:03:33.342408  392791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/config.json ...
	I1008 15:03:33.342697  392791 start.go:360] acquireMachinesLock for test-preload-105038: {Name:mka12a7774d0aa7dccf7190e47a0dc3a854191d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 15:03:33.342765  392791 start.go:364] duration metric: took 43.222µs to acquireMachinesLock for "test-preload-105038"
	I1008 15:03:33.342779  392791 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:03:33.342788  392791 fix.go:54] fixHost starting: 
	I1008 15:03:33.343097  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:03:33.343135  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:03:33.357065  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1008 15:03:33.357655  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:03:33.358197  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:03:33.358222  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:03:33.358622  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:03:33.358829  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:33.359010  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetState
	I1008 15:03:33.360957  392791 fix.go:112] recreateIfNeeded on test-preload-105038: state=Stopped err=<nil>
	I1008 15:03:33.361008  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	W1008 15:03:33.361215  392791 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:03:33.363418  392791 out.go:252] * Restarting existing kvm2 VM for "test-preload-105038" ...
	I1008 15:03:33.363451  392791 main.go:141] libmachine: (test-preload-105038) Calling .Start
	I1008 15:03:33.363698  392791 main.go:141] libmachine: (test-preload-105038) starting domain...
	I1008 15:03:33.363741  392791 main.go:141] libmachine: (test-preload-105038) ensuring networks are active...
	I1008 15:03:33.364612  392791 main.go:141] libmachine: (test-preload-105038) Ensuring network default is active
	I1008 15:03:33.365117  392791 main.go:141] libmachine: (test-preload-105038) Ensuring network mk-test-preload-105038 is active
	I1008 15:03:33.365697  392791 main.go:141] libmachine: (test-preload-105038) getting domain XML...
	I1008 15:03:33.366917  392791 main.go:141] libmachine: (test-preload-105038) DBG | starting domain XML:
	I1008 15:03:33.366933  392791 main.go:141] libmachine: (test-preload-105038) DBG | <domain type='kvm'>
	I1008 15:03:33.366941  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <name>test-preload-105038</name>
	I1008 15:03:33.366946  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <uuid>cd7f5ae2-f1df-4ff8-a62b-997418920180</uuid>
	I1008 15:03:33.366978  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <memory unit='KiB'>3145728</memory>
	I1008 15:03:33.367004  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1008 15:03:33.367011  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <vcpu placement='static'>2</vcpu>
	I1008 15:03:33.367018  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <os>
	I1008 15:03:33.367026  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1008 15:03:33.367032  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <boot dev='cdrom'/>
	I1008 15:03:33.367038  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <boot dev='hd'/>
	I1008 15:03:33.367043  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <bootmenu enable='no'/>
	I1008 15:03:33.367055  392791 main.go:141] libmachine: (test-preload-105038) DBG |   </os>
	I1008 15:03:33.367063  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <features>
	I1008 15:03:33.367072  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <acpi/>
	I1008 15:03:33.367080  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <apic/>
	I1008 15:03:33.367089  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <pae/>
	I1008 15:03:33.367097  392791 main.go:141] libmachine: (test-preload-105038) DBG |   </features>
	I1008 15:03:33.367108  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1008 15:03:33.367113  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <clock offset='utc'/>
	I1008 15:03:33.367158  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <on_poweroff>destroy</on_poweroff>
	I1008 15:03:33.367186  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <on_reboot>restart</on_reboot>
	I1008 15:03:33.367197  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <on_crash>destroy</on_crash>
	I1008 15:03:33.367214  392791 main.go:141] libmachine: (test-preload-105038) DBG |   <devices>
	I1008 15:03:33.367229  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1008 15:03:33.367238  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <disk type='file' device='cdrom'>
	I1008 15:03:33.367250  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <driver name='qemu' type='raw'/>
	I1008 15:03:33.367263  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <source file='/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/boot2docker.iso'/>
	I1008 15:03:33.367275  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <target dev='hdc' bus='scsi'/>
	I1008 15:03:33.367286  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <readonly/>
	I1008 15:03:33.367303  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1008 15:03:33.367314  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </disk>
	I1008 15:03:33.367324  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <disk type='file' device='disk'>
	I1008 15:03:33.367341  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1008 15:03:33.367371  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <source file='/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/test-preload-105038.rawdisk'/>
	I1008 15:03:33.367388  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <target dev='hda' bus='virtio'/>
	I1008 15:03:33.367425  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1008 15:03:33.367447  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </disk>
	I1008 15:03:33.367459  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1008 15:03:33.367475  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1008 15:03:33.367485  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </controller>
	I1008 15:03:33.367497  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1008 15:03:33.367511  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1008 15:03:33.367523  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1008 15:03:33.367535  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </controller>
	I1008 15:03:33.367546  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <interface type='network'>
	I1008 15:03:33.367562  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <mac address='52:54:00:2b:03:bd'/>
	I1008 15:03:33.367580  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <source network='mk-test-preload-105038'/>
	I1008 15:03:33.367595  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <model type='virtio'/>
	I1008 15:03:33.367614  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1008 15:03:33.367627  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </interface>
	I1008 15:03:33.367637  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <interface type='network'>
	I1008 15:03:33.367647  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <mac address='52:54:00:b3:98:f5'/>
	I1008 15:03:33.367657  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <source network='default'/>
	I1008 15:03:33.367667  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <model type='virtio'/>
	I1008 15:03:33.367676  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1008 15:03:33.367690  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </interface>
	I1008 15:03:33.367706  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <serial type='pty'>
	I1008 15:03:33.367721  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <target type='isa-serial' port='0'>
	I1008 15:03:33.367737  392791 main.go:141] libmachine: (test-preload-105038) DBG |         <model name='isa-serial'/>
	I1008 15:03:33.367769  392791 main.go:141] libmachine: (test-preload-105038) DBG |       </target>
	I1008 15:03:33.367785  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </serial>
	I1008 15:03:33.367799  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <console type='pty'>
	I1008 15:03:33.367810  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <target type='serial' port='0'/>
	I1008 15:03:33.367820  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </console>
	I1008 15:03:33.367828  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <input type='mouse' bus='ps2'/>
	I1008 15:03:33.367840  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <input type='keyboard' bus='ps2'/>
	I1008 15:03:33.367850  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <audio id='1' type='none'/>
	I1008 15:03:33.367865  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <memballoon model='virtio'>
	I1008 15:03:33.367881  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1008 15:03:33.367891  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </memballoon>
	I1008 15:03:33.367902  392791 main.go:141] libmachine: (test-preload-105038) DBG |     <rng model='virtio'>
	I1008 15:03:33.367933  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <backend model='random'>/dev/random</backend>
	I1008 15:03:33.367949  392791 main.go:141] libmachine: (test-preload-105038) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1008 15:03:33.367957  392791 main.go:141] libmachine: (test-preload-105038) DBG |     </rng>
	I1008 15:03:33.367964  392791 main.go:141] libmachine: (test-preload-105038) DBG |   </devices>
	I1008 15:03:33.367976  392791 main.go:141] libmachine: (test-preload-105038) DBG | </domain>
	I1008 15:03:33.367986  392791 main.go:141] libmachine: (test-preload-105038) DBG | 
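
The domain XML dumped above is fetched and the stopped VM restarted through libvirt. A sketch of the equivalent calls, assuming the libvirt.org/go/libvirt bindings (cgo plus libvirt development headers); this is not the exact code behind the main.go:141 lines:

    package main

    import (
    	"fmt"
    	"log"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	// Same URI as KVMQemuURI in the config dump above.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	dom, err := conn.LookupDomainByName("test-preload-105038")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	// The "getting domain XML..." step.
    	xml, err := dom.GetXMLDesc(0)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(xml)

    	// Create() starts a defined-but-stopped domain ("starting domain...").
    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    }
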
	I1008 15:03:34.677807  392791 main.go:141] libmachine: (test-preload-105038) waiting for domain to start...
	I1008 15:03:34.679168  392791 main.go:141] libmachine: (test-preload-105038) domain is now running
	I1008 15:03:34.679195  392791 main.go:141] libmachine: (test-preload-105038) waiting for IP...
	I1008 15:03:34.679998  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:34.680517  392791 main.go:141] libmachine: (test-preload-105038) found domain IP: 192.168.39.121
	I1008 15:03:34.680548  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has current primary IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:34.680557  392791 main.go:141] libmachine: (test-preload-105038) reserving static IP address...
	I1008 15:03:34.681071  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "test-preload-105038", mac: "52:54:00:2b:03:bd", ip: "192.168.39.121"} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:01:53 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:34.681104  392791 main.go:141] libmachine: (test-preload-105038) reserved static IP address 192.168.39.121 for domain test-preload-105038
	I1008 15:03:34.681130  392791 main.go:141] libmachine: (test-preload-105038) DBG | skip adding static IP to network mk-test-preload-105038 - found existing host DHCP lease matching {name: "test-preload-105038", mac: "52:54:00:2b:03:bd", ip: "192.168.39.121"}
	I1008 15:03:34.681163  392791 main.go:141] libmachine: (test-preload-105038) DBG | Getting to WaitForSSH function...
	I1008 15:03:34.681181  392791 main.go:141] libmachine: (test-preload-105038) waiting for SSH...
	I1008 15:03:34.683387  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:34.683729  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:01:53 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:34.683769  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:34.683904  392791 main.go:141] libmachine: (test-preload-105038) DBG | Using SSH client type: external
	I1008 15:03:34.683928  392791 main.go:141] libmachine: (test-preload-105038) DBG | Using SSH private key: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa (-rw-------)
	I1008 15:03:34.683974  392791 main.go:141] libmachine: (test-preload-105038) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 15:03:34.684002  392791 main.go:141] libmachine: (test-preload-105038) DBG | About to run SSH command:
	I1008 15:03:34.684014  392791 main.go:141] libmachine: (test-preload-105038) DBG | exit 0
	I1008 15:03:45.961436  392791 main.go:141] libmachine: (test-preload-105038) DBG | SSH cmd err, output: exit status 255: 
	I1008 15:03:45.961471  392791 main.go:141] libmachine: (test-preload-105038) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1008 15:03:45.961483  392791 main.go:141] libmachine: (test-preload-105038) DBG | command : exit 0
	I1008 15:03:45.961490  392791 main.go:141] libmachine: (test-preload-105038) DBG | err     : exit status 255
	I1008 15:03:45.961501  392791 main.go:141] libmachine: (test-preload-105038) DBG | output  : 
	I1008 15:03:48.962118  392791 main.go:141] libmachine: (test-preload-105038) DBG | Getting to WaitForSSH function...
	I1008 15:03:48.965150  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:48.965697  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:48.965738  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:48.965855  392791 main.go:141] libmachine: (test-preload-105038) DBG | Using SSH client type: external
	I1008 15:03:48.965885  392791 main.go:141] libmachine: (test-preload-105038) DBG | Using SSH private key: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa (-rw-------)
	I1008 15:03:48.965936  392791 main.go:141] libmachine: (test-preload-105038) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1008 15:03:48.965951  392791 main.go:141] libmachine: (test-preload-105038) DBG | About to run SSH command:
	I1008 15:03:48.965974  392791 main.go:141] libmachine: (test-preload-105038) DBG | exit 0
	I1008 15:03:49.094709  392791 main.go:141] libmachine: (test-preload-105038) DBG | SSH cmd err, output: <nil>: 
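
"Waiting for SSH" amounts to retrying exit 0 over the external ssh client until it exits cleanly; the first attempt above fails with status 255 while sshd is still coming up. A minimal retry loop under those assumptions (waitForSSH is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries a no-op remote command until sshd accepts the connection.
    func waitForSSH(ip, keyPath string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+ip, "exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // "SSH cmd err, output: <nil>"
    		}
    		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
    	}
    	return fmt.Errorf("ssh to %s not ready after %s", ip, deadline)
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa"
    	if err := waitForSSH("192.168.39.121", key, 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
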
	I1008 15:03:49.095206  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetConfigRaw
	I1008 15:03:49.095909  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetIP
	I1008 15:03:49.099076  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.099506  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.099536  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.100042  392791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/config.json ...
	I1008 15:03:49.100381  392791 machine.go:93] provisionDockerMachine start ...
	I1008 15:03:49.100409  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:49.100680  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.103576  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.103942  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.103967  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.104097  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:49.104330  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.104535  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.104700  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:49.104851  392791 main.go:141] libmachine: Using SSH client type: native
	I1008 15:03:49.105116  392791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1008 15:03:49.105129  392791 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:03:49.210637  392791 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 15:03:49.210671  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetMachineName
	I1008 15:03:49.210956  392791 buildroot.go:166] provisioning hostname "test-preload-105038"
	I1008 15:03:49.210981  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetMachineName
	I1008 15:03:49.211173  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.214763  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.215247  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.215281  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.215489  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:49.215724  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.215899  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.216119  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:49.216376  392791 main.go:141] libmachine: Using SSH client type: native
	I1008 15:03:49.216635  392791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1008 15:03:49.216652  392791 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-105038 && echo "test-preload-105038" | sudo tee /etc/hostname
	I1008 15:03:49.341545  392791 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-105038
	
	I1008 15:03:49.341582  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.345333  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.345914  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.345941  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.346173  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:49.346418  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.346625  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.346807  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:49.347020  392791 main.go:141] libmachine: Using SSH client type: native
	I1008 15:03:49.347232  392791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1008 15:03:49.347249  392791 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-105038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-105038/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-105038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:03:49.463985  392791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:03:49.464051  392791 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21681-357044/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-357044/.minikube}
	I1008 15:03:49.464091  392791 buildroot.go:174] setting up certificates
	I1008 15:03:49.464109  392791 provision.go:84] configureAuth start
	I1008 15:03:49.464129  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetMachineName
	I1008 15:03:49.464670  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetIP
	I1008 15:03:49.468120  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.468543  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.468572  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.468775  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.471376  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.471825  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.471868  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.472034  392791 provision.go:143] copyHostCerts
	I1008 15:03:49.472101  392791 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem, removing ...
	I1008 15:03:49.472121  392791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem
	I1008 15:03:49.472193  392791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem (1082 bytes)
	I1008 15:03:49.472387  392791 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem, removing ...
	I1008 15:03:49.472401  392791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem
	I1008 15:03:49.472437  392791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem (1123 bytes)
	I1008 15:03:49.472522  392791 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem, removing ...
	I1008 15:03:49.472530  392791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem
	I1008 15:03:49.472555  392791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem (1675 bytes)
	I1008 15:03:49.472629  392791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem org=jenkins.test-preload-105038 san=[127.0.0.1 192.168.39.121 localhost minikube test-preload-105038]
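
configureAuth then mints a server certificate whose SANs match the san=[...] list in the provision.go:117 line. A crypto/x509 sketch of that step, assuming a PKCS#1 RSA CA key (file names shortened; not minikube's exact implementation):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // mustDecode reads a PEM file and returns the first block's DER bytes.
    func mustDecode(path string) []byte {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatalf("no PEM block in %s", path)
    	}
    	return block.Bytes
    }

    func main() {
    	// CA material as provisioned under .minikube/certs.
    	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes a PKCS#1 RSA key
    	if err != nil {
    		log.Fatal(err)
    	}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-105038"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "test-preload-105038"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.121")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
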
	I1008 15:03:49.540863  392791 provision.go:177] copyRemoteCerts
	I1008 15:03:49.540938  392791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:03:49.540965  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.544197  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.544592  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.544646  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.544827  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:49.545050  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.545234  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:49.545408  392791 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa Username:docker}
	I1008 15:03:49.630779  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 15:03:49.662458  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 15:03:49.695699  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:03:49.727835  392791 provision.go:87] duration metric: took 263.690719ms to configureAuth
	I1008 15:03:49.727871  392791 buildroot.go:189] setting minikube options for container-runtime
	I1008 15:03:49.728069  392791 config.go:182] Loaded profile config "test-preload-105038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1008 15:03:49.728156  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.731683  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.732240  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.732274  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.732566  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:49.732779  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.732972  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.733102  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:49.733311  392791 main.go:141] libmachine: Using SSH client type: native
	I1008 15:03:49.733582  392791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1008 15:03:49.733628  392791 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:03:49.984569  392791 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:03:49.984597  392791 machine.go:96] duration metric: took 884.198086ms to provisionDockerMachine
	I1008 15:03:49.984610  392791 start.go:293] postStartSetup for "test-preload-105038" (driver="kvm2")
	I1008 15:03:49.984619  392791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:03:49.984667  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:49.985036  392791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:03:49.985078  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:49.988373  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.988771  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:49.988799  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:49.988890  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:49.989112  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:49.989312  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:49.989482  392791 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa Username:docker}
	I1008 15:03:50.075007  392791 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:03:50.080428  392791 info.go:137] Remote host: Buildroot 2025.02
	I1008 15:03:50.080459  392791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-357044/.minikube/addons for local assets ...
	I1008 15:03:50.080555  392791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-357044/.minikube/files for local assets ...
	I1008 15:03:50.080674  392791 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem -> 3619152.pem in /etc/ssl/certs
	I1008 15:03:50.080815  392791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:03:50.095689  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem --> /etc/ssl/certs/3619152.pem (1708 bytes)
	I1008 15:03:50.132745  392791 start.go:296] duration metric: took 148.120416ms for postStartSetup
	I1008 15:03:50.132797  392791 fix.go:56] duration metric: took 16.790008089s for fixHost
	I1008 15:03:50.132826  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:50.136457  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.136973  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:50.136999  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.137196  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:50.137504  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:50.137785  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:50.137971  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:50.138155  392791 main.go:141] libmachine: Using SSH client type: native
	I1008 15:03:50.138377  392791 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1008 15:03:50.138388  392791 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 15:03:50.244044  392791 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759935830.206523128
	
	I1008 15:03:50.244076  392791 fix.go:216] guest clock: 1759935830.206523128
	I1008 15:03:50.244085  392791 fix.go:229] Guest: 2025-10-08 15:03:50.206523128 +0000 UTC Remote: 2025-10-08 15:03:50.132802682 +0000 UTC m=+26.794699593 (delta=73.720446ms)
	I1008 15:03:50.244106  392791 fix.go:200] guest clock delta is within tolerance: 73.720446ms
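
fix.go parses the guest's date +%s.%N output, takes the absolute difference from the host clock, and resyncs only when the delta exceeds a tolerance; the delta=73.720446ms above is simply guest time minus host time. A sketch of that arithmetic (the tolerance constant here is illustrative, not minikube's exact value):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output such as
    // "1759935830.206523128" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := parts[1]
    		if len(frac) > 9 {
    			frac = frac[:9]
    		} else {
    			// Right-pad so "2065" means 206500000ns, not 2065ns.
    			frac += strings.Repeat("0", 9-len(frac))
    		}
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1759935830.206523128")
    	if err != nil {
    		panic(err)
    	}
    	delta := guest.Sub(time.Now())
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // illustrative threshold
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
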
	I1008 15:03:50.244112  392791 start.go:83] releasing machines lock for "test-preload-105038", held for 16.901339267s
	I1008 15:03:50.244131  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:50.244487  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetIP
	I1008 15:03:50.247775  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.248234  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:50.248265  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.248506  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:50.249084  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:50.249271  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:03:50.249397  392791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:03:50.249454  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:50.249533  392791 ssh_runner.go:195] Run: cat /version.json
	I1008 15:03:50.249563  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:03:50.252893  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.252912  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.253437  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:50.253464  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.253516  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:50.253544  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:50.253726  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:50.253863  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:03:50.253994  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:50.254077  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:03:50.254162  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:50.254220  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:03:50.254287  392791 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa Username:docker}
	I1008 15:03:50.254316  392791 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa Username:docker}
	I1008 15:03:50.365754  392791 ssh_runner.go:195] Run: systemctl --version
	I1008 15:03:50.372337  392791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:03:50.533682  392791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:03:50.541246  392791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:03:50.541333  392791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:03:50.561670  392791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 15:03:50.561699  392791 start.go:495] detecting cgroup driver to use...
	I1008 15:03:50.561782  392791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:03:50.583072  392791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:03:50.600563  392791 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:03:50.600651  392791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:03:50.619663  392791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:03:50.637281  392791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:03:50.787322  392791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:03:51.007035  392791 docker.go:234] disabling docker service ...
	I1008 15:03:51.007156  392791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:03:51.024787  392791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:03:51.040948  392791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:03:51.193995  392791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:03:51.346702  392791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 15:03:51.363059  392791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:03:51.386789  392791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1008 15:03:51.386875  392791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.399818  392791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 15:03:51.399893  392791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.413048  392791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.426107  392791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.439401  392791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:03:51.453093  392791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.465976  392791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.487508  392791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:03:51.504946  392791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:03:51.517105  392791 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 15:03:51.517175  392791 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 15:03:51.538185  392791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
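
The failed sysctl probe is tolerated by design: when /proc/sys/net/bridge is absent, the code falls back to loading br_netfilter and then enables IPv4 forwarding. The same check-then-fallback sequence, sketched with os/exec:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Probe first; failure usually just means br_netfilter isn't loaded yet.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		log.Printf("couldn't verify netfilter (might be okay): %v", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v", err)
    		}
    	}
    	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		log.Fatalf("enable ip_forward: %v", err)
    	}
    }
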
	I1008 15:03:51.550694  392791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:03:51.697442  392791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 15:03:51.817879  392791 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:03:51.817979  392791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:03:51.823775  392791 start.go:563] Will wait 60s for crictl version
	I1008 15:03:51.823844  392791 ssh_runner.go:195] Run: which crictl
	I1008 15:03:51.828168  392791 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 15:03:51.873151  392791 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
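
Both 60s waits are bounded polls: first for the CRI socket file to appear after the restart, then for a working crictl. A sketch of the file-polling half (waitForFile is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForFile polls until path exists or the deadline passes, mirroring
    // the "Will wait 60s for socket path /var/run/crio/crio.sock" step.
    func waitForFile(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("%s did not appear within %s", path, timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
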
	I1008 15:03:51.873248  392791 ssh_runner.go:195] Run: crio --version
	I1008 15:03:51.905609  392791 ssh_runner.go:195] Run: crio --version
	I1008 15:03:51.938864  392791 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1008 15:03:51.940309  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetIP
	I1008 15:03:51.943567  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:51.943915  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:03:51.943947  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:03:51.944242  392791 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 15:03:51.949434  392791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 15:03:51.965874  392791 kubeadm.go:883] updating cluster {Name:test-preload-105038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:03:51.966082  392791 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1008 15:03:51.966153  392791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:03:52.013981  392791 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1008 15:03:52.014053  392791 ssh_runner.go:195] Run: which lz4
	I1008 15:03:52.018484  392791 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 15:03:52.023414  392791 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 15:03:52.023456  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1008 15:03:53.557242  392791 crio.go:462] duration metric: took 1.538784454s to copy over tarball
	I1008 15:03:53.557341  392791 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 15:03:55.291252  392791 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.733874807s)
	I1008 15:03:55.291299  392791 crio.go:469] duration metric: took 1.734014789s to extract the tarball
	I1008 15:03:55.291310  392791 ssh_runner.go:146] rm: /preloaded.tar.lz4
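
Since no preloaded images were found on the node, the cached tarball is copied up and unpacked into /var with tar's lz4 filter. The extraction step is essentially the following (requires tar and lz4 on the guest):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same invocation as the ssh_runner line above; --xattrs keeps
    	// file capabilities (security.capability) on the extracted binaries.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("extract preload: %v\n%s", err, out)
    	}
    }
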
	I1008 15:03:55.332280  392791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:03:55.376971  392791 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:03:55.377004  392791 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:03:55.377014  392791 kubeadm.go:934] updating node { 192.168.39.121 8443 v1.32.0 crio true true} ...
	I1008 15:03:55.377144  392791 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-105038 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-105038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 15:03:55.377216  392791 ssh_runner.go:195] Run: crio config
	I1008 15:03:55.425051  392791 cni.go:84] Creating CNI manager for ""
	I1008 15:03:55.425078  392791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 15:03:55.425098  392791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:03:55.425122  392791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-105038 NodeName:test-preload-105038 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:03:55.425269  392791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-105038"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.121"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 15:03:55.425334  392791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1008 15:03:55.437748  392791 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:03:55.437838  392791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:03:55.449552  392791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1008 15:03:55.470161  392791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:03:55.490685  392791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
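Editor's note: the 2222-byte kubeadm.yaml.new copied above is the multi-document config shown at kubeadm.go:196, rendered in memory from the parameters logged at kubeadm.go:190. A minimal sketch of that kind of templating, using the values visible in this log but hypothetical field names (this is not minikube's actual generator):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical subset of the parameters visible in the log above.
    type kubeadmParams struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	PodSubnet         string
    	ServiceSubnet     string
    	KubernetesVersion string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initTmpl))
    	// Values taken from the test-preload-105038 log above.
    	p := kubeadmParams{
    		AdvertiseAddress:  "192.168.39.121",
    		BindPort:          8443,
    		NodeName:          "test-preload-105038",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    		KubernetesVersion: "v1.32.0",
    	}
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }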
	I1008 15:03:55.513164  392791 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I1008 15:03:55.517495  392791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
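Editor's note: the bash one-liner above makes the control-plane host entry idempotent: it drops any existing line ending in a tab plus control-plane.minikube.internal, appends the current IP mapping, and copies the staged file back over /etc/hosts. The same logic as a standalone Go sketch (the staging path is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const marker = "\tcontrol-plane.minikube.internal"
    	const entry = "192.168.39.121\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any stale control-plane mapping; keep everything else.
    		if strings.HasSuffix(line, marker) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	// Stage the rewrite first, mirroring the "> /tmp/h.$$; sudo cp" pattern above.
    	tmp := "/tmp/hosts.new" // illustrative path
    	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("staged hosts update at", tmp)
    }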
	I1008 15:03:55.532542  392791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:03:55.670279  392791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:03:55.700392  392791 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038 for IP: 192.168.39.121
	I1008 15:03:55.700420  392791 certs.go:195] generating shared ca certs ...
	I1008 15:03:55.700441  392791 certs.go:227] acquiring lock for ca certs: {Name:mk0e7909a623394743b0dc10595ebb34d09a814f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:03:55.700640  392791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key
	I1008 15:03:55.700723  392791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key
	I1008 15:03:55.700741  392791 certs.go:257] generating profile certs ...
	I1008 15:03:55.700870  392791 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.key
	I1008 15:03:55.700952  392791 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/apiserver.key.6dca4408
	I1008 15:03:55.701008  392791 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/proxy-client.key
	I1008 15:03:55.701158  392791 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915.pem (1338 bytes)
	W1008 15:03:55.701197  392791 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915_empty.pem, impossibly tiny 0 bytes
	I1008 15:03:55.701208  392791 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 15:03:55.701241  392791 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem (1082 bytes)
	I1008 15:03:55.701277  392791 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:03:55.701306  392791 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem (1675 bytes)
	I1008 15:03:55.701387  392791 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem (1708 bytes)
	I1008 15:03:55.702033  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:03:55.735261  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:03:55.768333  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:03:55.801718  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:03:55.834076  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 15:03:55.865146  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:03:55.896894  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:03:55.928548  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:03:55.960456  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:03:55.991841  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915.pem --> /usr/share/ca-certificates/361915.pem (1338 bytes)
	I1008 15:03:56.022951  392791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem --> /usr/share/ca-certificates/3619152.pem (1708 bytes)
	I1008 15:03:56.054038  392791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:03:56.076098  392791 ssh_runner.go:195] Run: openssl version
	I1008 15:03:56.083296  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:03:56.097122  392791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:03:56.103030  392791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:10 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:03:56.103105  392791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:03:56.110874  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:03:56.125052  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/361915.pem && ln -fs /usr/share/ca-certificates/361915.pem /etc/ssl/certs/361915.pem"
	I1008 15:03:56.139508  392791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/361915.pem
	I1008 15:03:56.145218  392791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:18 /usr/share/ca-certificates/361915.pem
	I1008 15:03:56.145295  392791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/361915.pem
	I1008 15:03:56.152843  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/361915.pem /etc/ssl/certs/51391683.0"
	I1008 15:03:56.167389  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3619152.pem && ln -fs /usr/share/ca-certificates/3619152.pem /etc/ssl/certs/3619152.pem"
	I1008 15:03:56.181737  392791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3619152.pem
	I1008 15:03:56.187231  392791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:18 /usr/share/ca-certificates/3619152.pem
	I1008 15:03:56.187306  392791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3619152.pem
	I1008 15:03:56.194806  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3619152.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:03:56.208509  392791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:03:56.214349  392791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:03:56.222403  392791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:03:56.230241  392791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:03:56.238424  392791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:03:56.246325  392791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:03:56.254147  392791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
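Editor's note: each `openssl x509 ... -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. An equivalent check in Go with crypto/x509 (the cert path is taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: fail if the cert is no longer valid 24h from now.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid past the 24h window, notAfter =", cert.NotAfter)
    }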
	I1008 15:03:56.262239  392791 kubeadm.go:400] StartCluster: {Name:test-preload-105038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:03:56.262365  392791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:03:56.262439  392791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:03:56.303087  392791 cri.go:89] found id: ""
	I1008 15:03:56.303162  392791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 15:03:56.316152  392791 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 15:03:56.316179  392791 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 15:03:56.316244  392791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 15:03:56.328689  392791 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 15:03:56.329229  392791 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-105038" does not appear in /home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 15:03:56.329398  392791 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-357044/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-105038" cluster setting kubeconfig missing "test-preload-105038" context setting]
	I1008 15:03:56.329925  392791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/kubeconfig: {Name:mk16a3f122b6b062cdcb94a3a6f8de0fc11cf727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:03:56.330654  392791 kapi.go:59] client config for test-preload-105038: &rest.Config{Host:"https://192.168.39.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.key", CAFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:03:56.331203  392791 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1008 15:03:56.331223  392791 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 15:03:56.331228  392791 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1008 15:03:56.331233  392791 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1008 15:03:56.331238  392791 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 15:03:56.331745  392791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 15:03:56.344199  392791 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.121
	I1008 15:03:56.344252  392791 kubeadm.go:1160] stopping kube-system containers ...
	I1008 15:03:56.344268  392791 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 15:03:56.344338  392791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:03:56.388347  392791 cri.go:89] found id: ""
	I1008 15:03:56.388449  392791 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 15:03:56.412022  392791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 15:03:56.425060  392791 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 15:03:56.425092  392791 kubeadm.go:157] found existing configuration files:
	
	I1008 15:03:56.425156  392791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 15:03:56.436936  392791 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 15:03:56.437022  392791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 15:03:56.449477  392791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 15:03:56.461050  392791 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 15:03:56.461137  392791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 15:03:56.473626  392791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 15:03:56.484742  392791 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 15:03:56.484814  392791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 15:03:56.497152  392791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 15:03:56.508692  392791 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 15:03:56.508759  392791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
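Editor's note: the four grep/rm pairs above apply one rule to admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so the `kubeadm init phase kubeconfig` step below regenerates it. A compact sketch of that rule:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    			os.Remove(f)
    			fmt.Println("removed stale config:", f)
    			continue
    		}
    		fmt.Println("keeping:", f)
    	}
    }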
	I1008 15:03:56.521430  392791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 15:03:56.534201  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 15:03:56.594318  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 15:03:57.957511  392791 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.363146537s)
	I1008 15:03:57.957621  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 15:03:58.206420  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 15:03:58.266468  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 15:03:58.352898  392791 api_server.go:52] waiting for apiserver process to appear ...
	I1008 15:03:58.352999  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:03:58.853604  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:03:59.353612  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:03:59.853617  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:04:00.354094  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:04:00.853099  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:04:00.885866  392791 api_server.go:72] duration metric: took 2.532973525s to wait for apiserver process to appear ...
	I1008 15:04:00.885903  392791 api_server.go:88] waiting for apiserver healthz status ...
	I1008 15:04:00.885930  392791 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1008 15:04:03.145329  392791 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 15:04:03.145387  392791 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 15:04:03.145406  392791 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1008 15:04:03.183065  392791 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 15:04:03.183098  392791 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 15:04:03.386471  392791 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1008 15:04:03.399690  392791 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 15:04:03.399728  392791 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[... 500 response body identical to the block above; 32 duplicate lines omitted ...]
	I1008 15:04:03.886730  392791 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1008 15:04:03.895802  392791 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[... 500 response body identical to the first 500 response above; 32 duplicate lines omitted ...]
	W1008 15:04:03.895845  392791 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[... 500 response body identical to the first 500 response above; 32 duplicate lines omitted ...]
	I1008 15:04:04.386424  392791 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1008 15:04:04.390955  392791 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I1008 15:04:04.398454  392791 api_server.go:141] control plane version: v1.32.0
	I1008 15:04:04.398487  392791 api_server.go:131] duration metric: took 3.512576668s to wait for apiserver health ...
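Editor's note: the 403 → 500 → 200 progression above is the normal apiserver restart sequence: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks are still running, then 200 once they complete. A minimal polling loop of the same shape; InsecureSkipVerify stands in for minikube's real client credentials, so this is a sketch, not minikube's api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: skip verification instead of wiring up the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.121:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz not ready yet (%d)\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }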
	I1008 15:04:04.398498  392791 cni.go:84] Creating CNI manager for ""
	I1008 15:04:04.398505  392791 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 15:04:04.400391  392791 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 15:04:04.401720  392791 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 15:04:04.415174  392791 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
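Editor's note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI plugin for the 10.244.0.0/16 pod CIDR chosen earlier. The exact contents are not in the log; a representative bridge conflist, generated from Go, might look like this (all field values here are assumptions, not a dump of minikube's actual file):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Representative bridge CNI config; values are assumptions,
    	// not the contents of minikube's 1-k8s.conflist.
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":        "bridge",
    				"bridge":      "bridge",
    				"ipMasq":      true,
    				"isGateway":   true,
    				"hairpinMode": true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }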
	I1008 15:04:04.461182  392791 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 15:04:04.469818  392791 system_pods.go:59] 7 kube-system pods found
	I1008 15:04:04.469860  392791 system_pods.go:61] "coredns-668d6bf9bc-dfvqn" [adc53a59-d20c-41bc-a93c-43cbc2178943] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 15:04:04.469868  392791 system_pods.go:61] "etcd-test-preload-105038" [6398025c-7d64-44bf-8778-668e65288a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 15:04:04.469877  392791 system_pods.go:61] "kube-apiserver-test-preload-105038" [34690e7b-c1c3-4d18-8b28-b9250b24f6e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 15:04:04.469885  392791 system_pods.go:61] "kube-controller-manager-test-preload-105038" [ff6a9f31-b88f-4c82-bb8b-2c3dae4af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 15:04:04.469893  392791 system_pods.go:61] "kube-proxy-bmqnw" [d0beba0e-e9aa-44eb-ad9d-7e995667518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 15:04:04.469901  392791 system_pods.go:61] "kube-scheduler-test-preload-105038" [9c1d2c1a-dbe1-4765-90ca-7605e1918c05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 15:04:04.469909  392791 system_pods.go:61] "storage-provisioner" [dcd88dea-f911-4df2-a501-de8c7912ef32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 15:04:04.469924  392791 system_pods.go:74] duration metric: took 8.712497ms to wait for pod list to return data ...
	I1008 15:04:04.469936  392791 node_conditions.go:102] verifying NodePressure condition ...
	I1008 15:04:04.486876  392791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 15:04:04.486914  392791 node_conditions.go:123] node cpu capacity is 2
	I1008 15:04:04.486927  392791 node_conditions.go:105] duration metric: took 16.986003ms to run NodePressure ...
	I1008 15:04:04.486990  392791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 15:04:04.761570  392791 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1008 15:04:04.765686  392791 kubeadm.go:743] kubelet initialised
	I1008 15:04:04.765715  392791 kubeadm.go:744] duration metric: took 4.114176ms waiting for restarted kubelet to initialise ...
	I1008 15:04:04.765734  392791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 15:04:04.782234  392791 ops.go:34] apiserver oom_adj: -16
	I1008 15:04:04.782269  392791 kubeadm.go:601] duration metric: took 8.466082169s to restartPrimaryControlPlane
	I1008 15:04:04.782283  392791 kubeadm.go:402] duration metric: took 8.520055478s to StartCluster
	I1008 15:04:04.782304  392791 settings.go:142] acquiring lock: {Name:mk117bd4e067de4a07a0962f9cb0a7e9e4347a17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:04:04.782404  392791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 15:04:04.783087  392791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/kubeconfig: {Name:mk16a3f122b6b062cdcb94a3a6f8de0fc11cf727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:04:04.783331  392791 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:04:04.783478  392791 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 15:04:04.783579  392791 config.go:182] Loaded profile config "test-preload-105038": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1008 15:04:04.783692  392791 addons.go:69] Setting storage-provisioner=true in profile "test-preload-105038"
	I1008 15:04:04.783727  392791 addons.go:238] Setting addon storage-provisioner=true in "test-preload-105038"
	I1008 15:04:04.783724  392791 addons.go:69] Setting default-storageclass=true in profile "test-preload-105038"
	W1008 15:04:04.783737  392791 addons.go:247] addon storage-provisioner should already be in state true
	I1008 15:04:04.783748  392791 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-105038"
	I1008 15:04:04.783774  392791 host.go:66] Checking if "test-preload-105038" exists ...
	I1008 15:04:04.784219  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:04:04.784240  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:04:04.784266  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:04:04.784384  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:04:04.785303  392791 out.go:179] * Verifying Kubernetes components...
	I1008 15:04:04.786869  392791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:04:04.798912  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I1008 15:04:04.799290  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I1008 15:04:04.799535  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:04:04.799725  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:04:04.800109  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:04:04.800123  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:04:04.800267  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:04:04.800297  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:04:04.800530  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:04:04.800665  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:04:04.800736  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetState
	I1008 15:04:04.801295  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:04:04.801349  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:04:04.803226  392791 kapi.go:59] client config for test-preload-105038: &rest.Config{Host:"https://192.168.39.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.key", CAFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:04:04.803623  392791 addons.go:238] Setting addon default-storageclass=true in "test-preload-105038"
	W1008 15:04:04.803648  392791 addons.go:247] addon default-storageclass should already be in state true
	I1008 15:04:04.803686  392791 host.go:66] Checking if "test-preload-105038" exists ...
	I1008 15:04:04.803984  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:04:04.804025  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:04:04.817032  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I1008 15:04:04.817533  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I1008 15:04:04.817627  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:04:04.818081  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:04:04.818239  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:04:04.818265  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:04:04.818659  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:04:04.818680  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:04:04.818686  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:04:04.818895  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetState
	I1008 15:04:04.818993  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:04:04.819629  392791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:04:04.819677  392791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:04:04.821401  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:04:04.826902  392791 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 15:04:04.828383  392791 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:04:04.828406  392791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 15:04:04.828433  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:04:04.832650  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:04:04.833295  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:04:04.833329  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:04:04.833554  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:04:04.833797  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:04:04.833970  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:04:04.834135  392791 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa Username:docker}
	I1008 15:04:04.836420  392791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I1008 15:04:04.837002  392791 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:04:04.837560  392791 main.go:141] libmachine: Using API Version  1
	I1008 15:04:04.837580  392791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:04:04.837946  392791 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:04:04.838243  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetState
	I1008 15:04:04.840234  392791 main.go:141] libmachine: (test-preload-105038) Calling .DriverName
	I1008 15:04:04.840485  392791 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 15:04:04.840503  392791 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 15:04:04.840522  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHHostname
	I1008 15:04:04.844062  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:04:04.844621  392791 main.go:141] libmachine: (test-preload-105038) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:03:bd", ip: ""} in network mk-test-preload-105038: {Iface:virbr1 ExpiryTime:2025-10-08 16:03:45 +0000 UTC Type:0 Mac:52:54:00:2b:03:bd Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:test-preload-105038 Clientid:01:52:54:00:2b:03:bd}
	I1008 15:04:04.844645  392791 main.go:141] libmachine: (test-preload-105038) DBG | domain test-preload-105038 has defined IP address 192.168.39.121 and MAC address 52:54:00:2b:03:bd in network mk-test-preload-105038
	I1008 15:04:04.844905  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHPort
	I1008 15:04:04.845176  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHKeyPath
	I1008 15:04:04.845444  392791 main.go:141] libmachine: (test-preload-105038) Calling .GetSSHUsername
	I1008 15:04:04.845813  392791 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa Username:docker}
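Editor's note: each `sshutil.go:53] new ssh client` line above corresponds to a key-based SSH connection into the VM at 192.168.39.121:22, used to push the addon manifests. A minimal equivalent with golang.org/x/crypto/ssh, reusing the key path and username from the log (a sketch, not minikube's sshutil):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21681-357044/.minikube/machines/test-preload-105038/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
    	}
    	conn, err := ssh.Dial("tcp", "192.168.39.121:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	sess, err := conn.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("uname -a")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }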
	I1008 15:04:04.996999  392791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:04:05.017428  392791 node_ready.go:35] waiting up to 6m0s for node "test-preload-105038" to be "Ready" ...
	I1008 15:04:05.020818  392791 node_ready.go:49] node "test-preload-105038" is "Ready"
	I1008 15:04:05.020850  392791 node_ready.go:38] duration metric: took 3.362651ms for node "test-preload-105038" to be "Ready" ...
	I1008 15:04:05.020870  392791 api_server.go:52] waiting for apiserver process to appear ...
	I1008 15:04:05.020921  392791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 15:04:05.040103  392791 api_server.go:72] duration metric: took 256.727143ms to wait for apiserver process to appear ...
	I1008 15:04:05.040134  392791 api_server.go:88] waiting for apiserver healthz status ...
	I1008 15:04:05.040155  392791 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I1008 15:04:05.044555  392791 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I1008 15:04:05.045696  392791 api_server.go:141] control plane version: v1.32.0
	I1008 15:04:05.045723  392791 api_server.go:131] duration metric: took 5.580624ms to wait for apiserver health ...
	I1008 15:04:05.045736  392791 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 15:04:05.049482  392791 system_pods.go:59] 7 kube-system pods found
	I1008 15:04:05.049512  392791 system_pods.go:61] "coredns-668d6bf9bc-dfvqn" [adc53a59-d20c-41bc-a93c-43cbc2178943] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 15:04:05.049519  392791 system_pods.go:61] "etcd-test-preload-105038" [6398025c-7d64-44bf-8778-668e65288a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 15:04:05.049528  392791 system_pods.go:61] "kube-apiserver-test-preload-105038" [34690e7b-c1c3-4d18-8b28-b9250b24f6e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 15:04:05.049535  392791 system_pods.go:61] "kube-controller-manager-test-preload-105038" [ff6a9f31-b88f-4c82-bb8b-2c3dae4af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 15:04:05.049541  392791 system_pods.go:61] "kube-proxy-bmqnw" [d0beba0e-e9aa-44eb-ad9d-7e995667518b] Running
	I1008 15:04:05.049549  392791 system_pods.go:61] "kube-scheduler-test-preload-105038" [9c1d2c1a-dbe1-4765-90ca-7605e1918c05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 15:04:05.049558  392791 system_pods.go:61] "storage-provisioner" [dcd88dea-f911-4df2-a501-de8c7912ef32] Running
	I1008 15:04:05.049604  392791 system_pods.go:74] duration metric: took 3.825364ms to wait for pod list to return data ...
	I1008 15:04:05.049618  392791 default_sa.go:34] waiting for default service account to be created ...
	I1008 15:04:05.052579  392791 default_sa.go:45] found service account: "default"
	I1008 15:04:05.052641  392791 default_sa.go:55] duration metric: took 3.010454ms for default service account to be created ...
	I1008 15:04:05.052651  392791 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 15:04:05.056302  392791 system_pods.go:86] 7 kube-system pods found
	I1008 15:04:05.056341  392791 system_pods.go:89] "coredns-668d6bf9bc-dfvqn" [adc53a59-d20c-41bc-a93c-43cbc2178943] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 15:04:05.056365  392791 system_pods.go:89] "etcd-test-preload-105038" [6398025c-7d64-44bf-8778-668e65288a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 15:04:05.056380  392791 system_pods.go:89] "kube-apiserver-test-preload-105038" [34690e7b-c1c3-4d18-8b28-b9250b24f6e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 15:04:05.056389  392791 system_pods.go:89] "kube-controller-manager-test-preload-105038" [ff6a9f31-b88f-4c82-bb8b-2c3dae4af778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 15:04:05.056400  392791 system_pods.go:89] "kube-proxy-bmqnw" [d0beba0e-e9aa-44eb-ad9d-7e995667518b] Running
	I1008 15:04:05.056409  392791 system_pods.go:89] "kube-scheduler-test-preload-105038" [9c1d2c1a-dbe1-4765-90ca-7605e1918c05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 15:04:05.056414  392791 system_pods.go:89] "storage-provisioner" [dcd88dea-f911-4df2-a501-de8c7912ef32] Running
	I1008 15:04:05.056428  392791 system_pods.go:126] duration metric: took 3.768572ms to wait for k8s-apps to be running ...
	I1008 15:04:05.056442  392791 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 15:04:05.056502  392791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 15:04:05.073625  392791 system_svc.go:56] duration metric: took 17.167456ms WaitForService to wait for kubelet
	I1008 15:04:05.073658  392791 kubeadm.go:586] duration metric: took 290.288499ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:04:05.073678  392791 node_conditions.go:102] verifying NodePressure condition ...
	I1008 15:04:05.078133  392791 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1008 15:04:05.078159  392791 node_conditions.go:123] node cpu capacity is 2
	I1008 15:04:05.078171  392791 node_conditions.go:105] duration metric: took 4.488707ms to run NodePressure ...
	I1008 15:04:05.078184  392791 start.go:241] waiting for startup goroutines ...
	I1008 15:04:05.151871  392791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 15:04:05.156012  392791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 15:04:05.447193  392791 main.go:141] libmachine: Making call to close driver server
	I1008 15:04:05.447226  392791 main.go:141] libmachine: (test-preload-105038) Calling .Close
	I1008 15:04:05.447549  392791 main.go:141] libmachine: Successfully made call to close driver server
	I1008 15:04:05.447577  392791 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 15:04:05.447596  392791 main.go:141] libmachine: Making call to close driver server
	I1008 15:04:05.447605  392791 main.go:141] libmachine: (test-preload-105038) Calling .Close
	I1008 15:04:05.447858  392791 main.go:141] libmachine: Successfully made call to close driver server
	I1008 15:04:05.447871  392791 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 15:04:05.454759  392791 main.go:141] libmachine: Making call to close driver server
	I1008 15:04:05.454780  392791 main.go:141] libmachine: (test-preload-105038) Calling .Close
	I1008 15:04:05.455085  392791 main.go:141] libmachine: Successfully made call to close driver server
	I1008 15:04:05.455105  392791 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 15:04:05.455115  392791 main.go:141] libmachine: (test-preload-105038) DBG | Closing plugin on server side
	I1008 15:04:05.925214  392791 main.go:141] libmachine: Making call to close driver server
	I1008 15:04:05.925253  392791 main.go:141] libmachine: (test-preload-105038) Calling .Close
	I1008 15:04:05.925568  392791 main.go:141] libmachine: Successfully made call to close driver server
	I1008 15:04:05.925590  392791 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 15:04:05.925600  392791 main.go:141] libmachine: Making call to close driver server
	I1008 15:04:05.925608  392791 main.go:141] libmachine: (test-preload-105038) Calling .Close
	I1008 15:04:05.925609  392791 main.go:141] libmachine: (test-preload-105038) DBG | Closing plugin on server side
	I1008 15:04:05.925836  392791 main.go:141] libmachine: (test-preload-105038) DBG | Closing plugin on server side
	I1008 15:04:05.926009  392791 main.go:141] libmachine: Successfully made call to close driver server
	I1008 15:04:05.926020  392791 main.go:141] libmachine: Making call to close connection to plugin binary
	I1008 15:04:05.927978  392791 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1008 15:04:05.929060  392791 addons.go:514] duration metric: took 1.145596754s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1008 15:04:05.929097  392791 start.go:246] waiting for cluster config update ...
	I1008 15:04:05.929113  392791 start.go:255] writing updated cluster config ...
	I1008 15:04:05.929367  392791 ssh_runner.go:195] Run: rm -f paused
	I1008 15:04:05.934629  392791 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 15:04:05.935112  392791 kapi.go:59] client config for test-preload-105038: &rest.Config{Host:"https://192.168.39.121:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.crt", KeyFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/profiles/test-preload-105038/client.key", CAFile:"/home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 15:04:05.939183  392791 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-dfvqn" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 15:04:07.946699  392791 pod_ready.go:104] pod "coredns-668d6bf9bc-dfvqn" is not "Ready", error: <nil>
	W1008 15:04:09.946931  392791 pod_ready.go:104] pod "coredns-668d6bf9bc-dfvqn" is not "Ready", error: <nil>
	W1008 15:04:12.446157  392791 pod_ready.go:104] pod "coredns-668d6bf9bc-dfvqn" is not "Ready", error: <nil>
	W1008 15:04:14.944475  392791 pod_ready.go:104] pod "coredns-668d6bf9bc-dfvqn" is not "Ready", error: <nil>
	I1008 15:04:15.450127  392791 pod_ready.go:94] pod "coredns-668d6bf9bc-dfvqn" is "Ready"
	I1008 15:04:15.450157  392791 pod_ready.go:86] duration metric: took 9.510951328s for pod "coredns-668d6bf9bc-dfvqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:15.455192  392791 pod_ready.go:83] waiting for pod "etcd-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:15.459514  392791 pod_ready.go:94] pod "etcd-test-preload-105038" is "Ready"
	I1008 15:04:15.459541  392791 pod_ready.go:86] duration metric: took 4.322435ms for pod "etcd-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:15.465405  392791 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:15.477785  392791 pod_ready.go:94] pod "kube-apiserver-test-preload-105038" is "Ready"
	I1008 15:04:15.477816  392791 pod_ready.go:86] duration metric: took 12.384939ms for pod "kube-apiserver-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:15.484745  392791 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:16.044300  392791 pod_ready.go:94] pod "kube-controller-manager-test-preload-105038" is "Ready"
	I1008 15:04:16.044331  392791 pod_ready.go:86] duration metric: took 559.558357ms for pod "kube-controller-manager-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:16.244132  392791 pod_ready.go:83] waiting for pod "kube-proxy-bmqnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:16.643869  392791 pod_ready.go:94] pod "kube-proxy-bmqnw" is "Ready"
	I1008 15:04:16.643909  392791 pod_ready.go:86] duration metric: took 399.746084ms for pod "kube-proxy-bmqnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:16.843420  392791 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:17.244074  392791 pod_ready.go:94] pod "kube-scheduler-test-preload-105038" is "Ready"
	I1008 15:04:17.244118  392791 pod_ready.go:86] duration metric: took 400.670274ms for pod "kube-scheduler-test-preload-105038" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 15:04:17.244135  392791 pod_ready.go:40] duration metric: took 11.309466359s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 15:04:17.289099  392791 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1008 15:04:17.290955  392791 out.go:203] 
	W1008 15:04:17.292224  392791 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1008 15:04:17.293348  392791 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1008 15:04:17.294580  392791 out.go:179] * Done! kubectl is now configured to use "test-preload-105038" cluster and "default" namespace by default
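
The pod_ready wait traced above polls "kube-system" pods by label until each reports the Ready condition or the 4m0s budget expires. A minimal client-go sketch of that loop, assuming a standard kubeconfig; the names and structure are illustrative, not minikube's actual pod_ready.go:

// Poll kube-system pods matching a label selector until all are Ready.
// Illustrative sketch only; mirrors the behavior logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
	// Poll every 2s, give up after 4m -- the "extra waiting" budget in the log.
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		if err := waitForLabel(context.Background(), cs, sel); err != nil {
			panic(err)
		}
		fmt.Println("ready:", sel)
	}
}

Returning false, nil on a failed List keeps the poll alive across transient API-server hiccups, which matches the repeated "is not Ready, error: <nil>" retries visible in the log before coredns finally flips to Ready.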
	
	
	==> CRI-O <==
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.352546844Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:18b72b8932b0f875ba65061a900160782b36d2db137f5e9b499fb65748b4212c,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-dfvqn,Uid:adc53a59-d20c-41bc-a93c-43cbc2178943,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935847156805036,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-dfvqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc53a59-d20c-41bc-a93c-43cbc2178943,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-08T15:04:03.284904163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6c42588f0a21111dafa3eb39c5e5ab6ec100a51528c027282dbd39bf2e03acd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:dcd88dea-f911-4df2-a501-de8c7912ef32,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935843620515588,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd88dea-f911-4df2-a501-de8c7912ef32,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-08T15:04:03.284902656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f0f1ca84c63b6a94beaedb41fcd6ae79bb168007fb40a31a6920a4392491396,Metadata:&PodSandboxMetadata{Name:kube-proxy-bmqnw,Uid:d0beba0e-e9aa-44eb-ad9d-7e995667518b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935843594854389,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bmqnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0beba0e-e9aa-44eb-ad9d-7e995667518b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-08T15:04:03.284899938Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:347e61bd6a7b034a9f457b0220116ca8f17e2cb4e63951005700e7da8971edbf,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-105038,Uid:9dbd0f209e8d2bb79
dc4cc2179a2f959,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935840018476994,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbd0f209e8d2bb79dc4cc2179a2f959,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.121:2379,kubernetes.io/config.hash: 9dbd0f209e8d2bb79dc4cc2179a2f959,kubernetes.io/config.seen: 2025-10-08T15:03:58.330774692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:da0eea060b28a47daec12c3aa1ef3b608f7b0681d21098e3fd0d0bf58d12c399,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-105038,Uid:36daf82cbadb402e6661bce291bf08f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935840008800667,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube
-controller-manager-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36daf82cbadb402e6661bce291bf08f3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 36daf82cbadb402e6661bce291bf08f3,kubernetes.io/config.seen: 2025-10-08T15:03:58.275322313Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f45d9cdd496d77236fca6faed8526d794b67abcef0fa64141a9ff15ee555d709,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-105038,Uid:d1866b584bb5e116f590a9cda1e25628,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935840006899442,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1866b584bb5e116f590a9cda1e25628,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d1866b584bb5e116f590a9cda1e25628,kubernetes.io/config.seen: 2025-10-08T15
:03:58.275323602Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:abd283dc2dd344096c8e8768005233215b510550d7898173a2bea9b199d50459,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-105038,Uid:0895485d8ab189043f1cfc9b0c38f6fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759935840003443652,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0895485d8ab189043f1cfc9b0c38f6fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.121:8443,kubernetes.io/config.hash: 0895485d8ab189043f1cfc9b0c38f6fb,kubernetes.io/config.seen: 2025-10-08T15:03:58.275316469Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=38424393-0d25-46ba-9882-8b502588d1ee name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.355653069Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=957dcb0a-c03c-472f-8a55-be7c3a893164 name=/runtime.v1.RuntimeService/Status
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.355731203Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=957dcb0a-c03c-472f-8a55-be7c3a893164 name=/runtime.v1.RuntimeService/Status
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.355687730Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:cc428270803c64298f2d180627d95e321113e245027d94405880fc6fe93aabca,Verbose:false,}" file="otel-collector/interceptors.go:62" id=08d1fe54-36b1-4484-8814-c15681ee1a29 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.355897259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d85477b2-77cc-4bbc-89c1-0d7ec6439a76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.357763187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d85477b2-77cc-4bbc-89c1-0d7ec6439a76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.358087560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:387ca45d910626dc0da2785e711d3f3c38963b8619b781d9d0c3c0e6202280c6,PodSandboxId:18b72b8932b0f875ba65061a900160782b36d2db137f5e9b499fb65748b4212c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759935847371395869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dfvqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc53a59-d20c-41bc-a93c-43cbc2178943,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc428270803c64298f2d180627d95e321113e245027d94405880fc6fe93aabca,PodSandboxId:5f0f1ca84c63b6a94beaedb41fcd6ae79bb168007fb40a31a6920a4392491396,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759935843766667797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d0beba0e-e9aa-44eb-ad9d-7e995667518b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723a8671de3de678c19afdae8517a82c6c03ba8a3f718b7aa82732f662e213b,PodSandboxId:c6c42588f0a21111dafa3eb39c5e5ab6ec100a51528c027282dbd39bf2e03acd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759935843787200689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc
d88dea-f911-4df2-a501-de8c7912ef32,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de31bee72fe920855865fea1d1a057959546251373f157be0de875988ace17e,PodSandboxId:347e61bd6a7b034a9f457b0220116ca8f17e2cb4e63951005700e7da8971edbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759935840262656124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbd0f209e8d2bb79dc4cc2179a2f959,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34cd7d1502b622420121562d1ec501c828b6042259c7c7b27ac0147ead3db3fe,PodSandboxId:da0eea060b28a47daec12c3aa1ef3b608f7b0681d21098e3fd0d0bf58d12c399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759935840250561599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36daf82cbadb402e6661bce
291bf08f3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd36e0608a36f5b322b9dc62dbffb8471a3337b1002b56d37218683fc9be206,PodSandboxId:abd283dc2dd344096c8e8768005233215b510550d7898173a2bea9b199d50459,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759935840249183620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0895485d8ab189043f1cfc9b0c38f6fb,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1793d1c0c2ec762b4da1776c0b9ac233958c44d3842a902c6bc9bc85e128088,PodSandboxId:f45d9cdd496d77236fca6faed8526d794b67abcef0fa64141a9ff15ee555d709,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759935840205542525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1866b584bb5e116f590a9cda1e25628,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d85477b2-77cc-4bbc-89c1-0d7ec6439a76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.355901470Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:cc428270803c64298f2d180627d95e321113e245027d94405880fc6fe93aabca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1759935843828586005,StartedAt:1759935843872542020,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0beba0e-e9aa-44eb-ad9d-7e995667518b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d0beba0e-e9aa-44eb-ad9d-7e995667518b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d0beba0e-e9aa-44eb-ad9d-7e995667518b/containers/kube-proxy/0a8c0553,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/va
r/lib/kubelet/pods/d0beba0e-e9aa-44eb-ad9d-7e995667518b/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d0beba0e-e9aa-44eb-ad9d-7e995667518b/volumes/kubernetes.io~projected/kube-api-access-kzl7p,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-bmqnw_d0beba0e-e9aa-44eb-ad9d-7e995667518b/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel
-collector/interceptors.go:74" id=08d1fe54-36b1-4484-8814-c15681ee1a29 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.362267625Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d723a8671de3de678c19afdae8517a82c6c03ba8a3f718b7aa82732f662e213b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=703f7a64-69d5-415b-b3e2-8217e3b6610b name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.362536246Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d723a8671de3de678c19afdae8517a82c6c03ba8a3f718b7aa82732f662e213b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1759935843827440796,StartedAt:1759935843863070752,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd88dea-f911-4df2-a501-de8c7912ef32,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/dcd88dea-f911-4df2-a501-de8c7912ef32/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/dcd88dea-f911-4df2-a501-de8c7912ef32/containers/storage-provisioner/ff38d4d2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/dcd88dea-f911-4df2-a501-de8c7912ef32/volumes/kubernetes.io~projected/kube-api-access-b9l96,Readonly:true,SelinuxRelabel:fa
lse,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_dcd88dea-f911-4df2-a501-de8c7912ef32/storage-provisioner/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=703f7a64-69d5-415b-b3e2-8217e3b6610b name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.364290269Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4de31bee72fe920855865fea1d1a057959546251373f157be0de875988ace17e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7f1a4d71-6303-456f-8e1c-85d1ffe7adeb name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.364421574Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4de31bee72fe920855865fea1d1a057959546251373f157be0de875988ace17e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1759935840452930589,StartedAt:1759935840582211771,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.16-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbd0f209e8d2bb79dc4cc2179a2f959,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9dbd0f209e8d2bb79dc4cc2179a2f959/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9dbd0f209e8d2bb79dc4cc2179a2f959/containers/etcd/15eb5701,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_e
tcd-test-preload-105038_9dbd0f209e8d2bb79dc4cc2179a2f959/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7f1a4d71-6303-456f-8e1c-85d1ffe7adeb name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.365641855Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:34cd7d1502b622420121562d1ec501c828b6042259c7c7b27ac0147ead3db3fe,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1e39211e-2f85-423d-ba05-a27bb50ca9f1 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.365797640Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:34cd7d1502b622420121562d1ec501c828b6042259c7c7b27ac0147ead3db3fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1759935840357731989,StartedAt:1759935840481100999,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36daf82cbadb402e6661bce291bf08f3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/36daf82cbadb402e6661bce291bf08f3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/36daf82cbadb402e6661bce291bf08f3/containers/kube-controller-manager/beb8502d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRI
VATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-test-preload-105038_36daf82cbadb402e6661bce291bf08f3/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,C
pusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1e39211e-2f85-423d-ba05-a27bb50ca9f1 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.366344084Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6bd36e0608a36f5b322b9dc62dbffb8471a3337b1002b56d37218683fc9be206,Verbose:false,}" file="otel-collector/interceptors.go:62" id=849b5a72-c687-4e43-b927-aad6bb4a050c name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.366459936Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6bd36e0608a36f5b322b9dc62dbffb8471a3337b1002b56d37218683fc9be206,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1759935840314866680,StartedAt:1759935840422214079,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0895485d8ab189043f1cfc9b0c38f6fb,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0895485d8ab189043f1cfc9b0c38f6fb/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0895485d8ab189043f1cfc9b0c38f6fb/containers/kube-apiserver/6c508ccf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Con
tainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-test-preload-105038_0895485d8ab189043f1cfc9b0c38f6fb/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=849b5a72-c687-4e43-b927-aad6bb4a050c name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.367400443Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e1793d1c0c2ec762b4da1776c0b9ac233958c44d3842a902c6bc9bc85e128088,Verbose:false,}" file="otel-collector/interceptors.go:62" id=67066a07-dd1b-48eb-aa4c-e241b7909181 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.367506554Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e1793d1c0c2ec762b4da1776c0b9ac233958c44d3842a902c6bc9bc85e128088,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1759935840287048370,StartedAt:1759935840459170445,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1866b584bb5e116f590a9cda1e25628,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d1866b584bb5e116f590a9cda1e25628/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d1866b584bb5e116f590a9cda1e25628/containers/kube-scheduler/e4fd1f20,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-test-preload-105038_d1866b584bb5e116f590a9cda1e25628/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources
{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=67066a07-dd1b-48eb-aa4c-e241b7909181 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.384648949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d131363-29d7-4824-af38-ea7d8871afe6 name=/runtime.v1.RuntimeService/Version
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.384737422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d131363-29d7-4824-af38-ea7d8871afe6 name=/runtime.v1.RuntimeService/Version
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.385901593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77effafa-3950-4de0-9b42-1d47919273dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.386313457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759935858386291216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77effafa-3950-4de0-9b42-1d47919273dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.386933182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=557ae273-e35b-40c7-b132-d14f9a20953f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.387043858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=557ae273-e35b-40c7-b132-d14f9a20953f name=/runtime.v1.RuntimeService/ListContainers
	Oct 08 15:04:18 test-preload-105038 crio[843]: time="2025-10-08 15:04:18.387237758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:387ca45d910626dc0da2785e711d3f3c38963b8619b781d9d0c3c0e6202280c6,PodSandboxId:18b72b8932b0f875ba65061a900160782b36d2db137f5e9b499fb65748b4212c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759935847371395869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dfvqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adc53a59-d20c-41bc-a93c-43cbc2178943,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc428270803c64298f2d180627d95e321113e245027d94405880fc6fe93aabca,PodSandboxId:5f0f1ca84c63b6a94beaedb41fcd6ae79bb168007fb40a31a6920a4392491396,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759935843766667797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d0beba0e-e9aa-44eb-ad9d-7e995667518b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723a8671de3de678c19afdae8517a82c6c03ba8a3f718b7aa82732f662e213b,PodSandboxId:c6c42588f0a21111dafa3eb39c5e5ab6ec100a51528c027282dbd39bf2e03acd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759935843787200689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc
d88dea-f911-4df2-a501-de8c7912ef32,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de31bee72fe920855865fea1d1a057959546251373f157be0de875988ace17e,PodSandboxId:347e61bd6a7b034a9f457b0220116ca8f17e2cb4e63951005700e7da8971edbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759935840262656124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dbd0f209e8d2bb79dc4cc2179a2f959,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34cd7d1502b622420121562d1ec501c828b6042259c7c7b27ac0147ead3db3fe,PodSandboxId:da0eea060b28a47daec12c3aa1ef3b608f7b0681d21098e3fd0d0bf58d12c399,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759935840250561599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36daf82cbadb402e6661bce
291bf08f3,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd36e0608a36f5b322b9dc62dbffb8471a3337b1002b56d37218683fc9be206,PodSandboxId:abd283dc2dd344096c8e8768005233215b510550d7898173a2bea9b199d50459,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759935840249183620,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0895485d8ab189043f1cfc9b0c38f6fb,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1793d1c0c2ec762b4da1776c0b9ac233958c44d3842a902c6bc9bc85e128088,PodSandboxId:f45d9cdd496d77236fca6faed8526d794b67abcef0fa64141a9ff15ee555d709,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759935840205542525,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105038,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1866b584bb5e116f590a9cda1e25628,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=557ae273-e35b-40c7-b132-d14f9a20953f name=/runtime.v1.RuntimeService/ListContainers
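
The Request/Response pairs in this section are CRI RuntimeService gRPC calls answered by crio over its unix socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation below). A minimal sketch reproducing the traced Version and running-only ListContainers calls with the k8s.io/cri-api stubs; the socket path is assumed from that annotation and error handling is trimmed:

// Issue the same CRI calls traced above directly against CRI-O.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// The call crio answered with "RuntimeName:cri-o,RuntimeVersion:1.29.1".
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// Same filter as the traced ListContainers request: running containers only.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{
				State: runtimeapi.ContainerState_CONTAINER_RUNNING,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id[:13], c.Metadata.Name, c.Metadata.Attempt)
	}
}

The "==> container status <==" table below is rendered from the same ListContainers data, which is why its ATTEMPT column lines up with the Attempt and restartCount fields in these responses (1 for the restarted control-plane containers, 2 for storage-provisioner).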
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	387ca45d91062       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   18b72b8932b0f       coredns-668d6bf9bc-dfvqn
	d723a8671de3d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   c6c42588f0a21       storage-provisioner
	cc428270803c6       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   5f0f1ca84c63b       kube-proxy-bmqnw
	4de31bee72fe9       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   347e61bd6a7b0       etcd-test-preload-105038
	34cd7d1502b62       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   da0eea060b28a       kube-controller-manager-test-preload-105038
	6bd36e0608a36       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   abd283dc2dd34       kube-apiserver-test-preload-105038
	e1793d1c0c2ec       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   f45d9cdd496d7       kube-scheduler-test-preload-105038
	
	
	==> coredns [387ca45d910626dc0da2785e711d3f3c38963b8619b781d9d0c3c0e6202280c6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37385 - 50437 "HINFO IN 7023156151203387548.2895017251923985870. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028903739s
	
	
	==> describe nodes <==
	Name:               test-preload-105038
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-105038
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=test-preload-105038
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T15_02_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 15:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-105038
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 15:04:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 15:04:04 +0000   Wed, 08 Oct 2025 15:02:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 15:04:04 +0000   Wed, 08 Oct 2025 15:02:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 15:04:04 +0000   Wed, 08 Oct 2025 15:02:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 15:04:04 +0000   Wed, 08 Oct 2025 15:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    test-preload-105038
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd7f5ae2f1df4ff8a62b997418920180
	  System UUID:                cd7f5ae2-f1df-4ff8-a62b-997418920180
	  Boot ID:                    9dcd42a7-5e82-40d4-90ab-aba430b632ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-dfvqn                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     105s
	  kube-system                 etcd-test-preload-105038                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         110s
	  kube-system                 kube-apiserver-test-preload-105038             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-test-preload-105038    200m (10%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-bmqnw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-test-preload-105038             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 103s               kube-proxy       
	  Normal   NodeHasSufficientPID     110s               kubelet          Node test-preload-105038 status is now: NodeHasSufficientPID
	  Normal   Starting                 110s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  110s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  110s               kubelet          Node test-preload-105038 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    110s               kubelet          Node test-preload-105038 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                109s               kubelet          Node test-preload-105038 status is now: NodeReady
	  Normal   RegisteredNode           106s               node-controller  Node test-preload-105038 event: Registered Node test-preload-105038 in Controller
	  Normal   CIDRAssignmentFailed     106s               cidrAllocator    Node test-preload-105038 status is now: CIDRAssignmentFailed
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-105038 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-105038 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-105038 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-105038 has been rebooted, boot id: 9dcd42a7-5e82-40d4-90ab-aba430b632ab
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-105038 event: Registered Node test-preload-105038 in Controller
	
	
	==> dmesg <==
	[Oct 8 15:03] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000095] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004430] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.960293] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087380] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.094652] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 8 15:04] kauditd_printk_skb: 177 callbacks suppressed
	[  +8.184431] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [4de31bee72fe920855865fea1d1a057959546251373f157be0de875988ace17e] <==
	{"level":"info","ts":"2025-10-08T15:04:00.693051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	{"level":"info","ts":"2025-10-08T15:04:00.696972Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"]}
	{"level":"info","ts":"2025-10-08T15:04:00.697102Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T15:04:00.697142Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T15:04:00.700491Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-08T15:04:00.700867Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-08T15:04:00.702640Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-08T15:04:00.702840Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	{"level":"info","ts":"2025-10-08T15:04:00.705274Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.121:2380"}
	{"level":"info","ts":"2025-10-08T15:04:01.965129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-08T15:04:01.965171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-08T15:04:01.965224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 2"}
	{"level":"info","ts":"2025-10-08T15:04:01.965239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became candidate at term 3"}
	{"level":"info","ts":"2025-10-08T15:04:01.965245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 3"}
	{"level":"info","ts":"2025-10-08T15:04:01.965252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became leader at term 3"}
	{"level":"info","ts":"2025-10-08T15:04:01.965259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 3"}
	{"level":"info","ts":"2025-10-08T15:04:01.967072Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:test-preload-105038 ClientURLs:[https://192.168.39.121:2379]}","request-path":"/0/members/cbdf275f553df7c2/attributes","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-08T15:04:01.967241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T15:04:01.967319Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T15:04:01.967757Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-08T15:04:01.967797Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-08T15:04:01.968239Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-08T15:04:01.968270Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-08T15:04:01.968948Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-08T15:04:01.968949Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	
	
	==> kernel <==
	 15:04:18 up 0 min,  0 users,  load average: 0.29, 0.09, 0.03
	Linux test-preload-105038 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6bd36e0608a36f5b322b9dc62dbffb8471a3337b1002b56d37218683fc9be206] <==
	I1008 15:04:03.176040       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 15:04:03.176341       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 15:04:03.178149       1 shared_informer.go:320] Caches are synced for configmaps
	I1008 15:04:03.176105       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 15:04:03.184583       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 15:04:03.185063       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1008 15:04:03.209579       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 15:04:03.224446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1008 15:04:03.227733       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1008 15:04:03.227775       1 aggregator.go:171] initial CRD sync complete...
	I1008 15:04:03.227790       1 autoregister_controller.go:144] Starting autoregister controller
	I1008 15:04:03.227796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 15:04:03.227801       1 cache.go:39] Caches are synced for autoregister controller
	I1008 15:04:03.230240       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1008 15:04:03.230340       1 policy_source.go:240] refreshing policies
	I1008 15:04:03.274560       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 15:04:03.389426       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1008 15:04:04.082988       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 15:04:04.594953       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1008 15:04:04.632457       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1008 15:04:04.670553       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 15:04:04.679386       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 15:04:06.457798       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1008 15:04:06.553448       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 15:04:06.802029       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [34cd7d1502b622420121562d1ec501c828b6042259c7c7b27ac0147ead3db3fe] <==
	I1008 15:04:06.399012       1 shared_informer.go:320] Caches are synced for TTL
	I1008 15:04:06.400742       1 shared_informer.go:320] Caches are synced for crt configmap
	I1008 15:04:06.400825       1 shared_informer.go:320] Caches are synced for expand
	I1008 15:04:06.403084       1 shared_informer.go:320] Caches are synced for service account
	I1008 15:04:06.404254       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 15:04:06.404293       1 shared_informer.go:320] Caches are synced for cronjob
	I1008 15:04:06.408481       1 shared_informer.go:320] Caches are synced for node
	I1008 15:04:06.408550       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1008 15:04:06.408574       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1008 15:04:06.408579       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1008 15:04:06.408584       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1008 15:04:06.408741       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-105038"
	I1008 15:04:06.412347       1 shared_informer.go:320] Caches are synced for endpoint
	I1008 15:04:06.417961       1 shared_informer.go:320] Caches are synced for resource quota
	I1008 15:04:06.422381       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 15:04:06.441624       1 shared_informer.go:320] Caches are synced for namespace
	I1008 15:04:06.442975       1 shared_informer.go:320] Caches are synced for garbage collector
	I1008 15:04:06.443010       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 15:04:06.443018       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 15:04:06.451572       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1008 15:04:06.466795       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.5002ms"
	I1008 15:04:06.467078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="230.627µs"
	I1008 15:04:07.484529       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.338µs"
	I1008 15:04:15.391473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="17.046863ms"
	I1008 15:04:15.391946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="225.371µs"
	
	
	==> kube-proxy [cc428270803c64298f2d180627d95e321113e245027d94405880fc6fe93aabca] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1008 15:04:03.998387       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1008 15:04:04.008146       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.121"]
	E1008 15:04:04.008315       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 15:04:04.046209       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1008 15:04:04.046241       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1008 15:04:04.046264       1 server_linux.go:170] "Using iptables Proxier"
	I1008 15:04:04.049101       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 15:04:04.049442       1 server.go:497] "Version info" version="v1.32.0"
	I1008 15:04:04.049470       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 15:04:04.051144       1 config.go:199] "Starting service config controller"
	I1008 15:04:04.051192       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1008 15:04:04.051220       1 config.go:105] "Starting endpoint slice config controller"
	I1008 15:04:04.051225       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1008 15:04:04.054646       1 config.go:329] "Starting node config controller"
	I1008 15:04:04.054673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1008 15:04:04.152492       1 shared_informer.go:320] Caches are synced for service config
	I1008 15:04:04.152849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1008 15:04:04.155517       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e1793d1c0c2ec762b4da1776c0b9ac233958c44d3842a902c6bc9bc85e128088] <==
	I1008 15:04:01.212892       1 serving.go:386] Generated self-signed cert in-memory
	W1008 15:04:03.144855       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 15:04:03.144999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 15:04:03.145026       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 15:04:03.145048       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 15:04:03.185361       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1008 15:04:03.185402       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 15:04:03.192247       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 15:04:03.192348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1008 15:04:03.192357       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 15:04:03.192520       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 15:04:03.295908       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.361988    1168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0beba0e-e9aa-44eb-ad9d-7e995667518b-lib-modules\") pod \"kube-proxy-bmqnw\" (UID: \"d0beba0e-e9aa-44eb-ad9d-7e995667518b\") " pod="kube-system/kube-proxy-bmqnw"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.362017    1168 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dcd88dea-f911-4df2-a501-de8c7912ef32-tmp\") pod \"storage-provisioner\" (UID: \"dcd88dea-f911-4df2-a501-de8c7912ef32\") " pod="kube-system/storage-provisioner"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.362806    1168 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.362903    1168 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adc53a59-d20c-41bc-a93c-43cbc2178943-config-volume podName:adc53a59-d20c-41bc-a93c-43cbc2178943 nodeName:}" failed. No retries permitted until 2025-10-08 15:04:03.86287265 +0000 UTC m=+5.686940770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/adc53a59-d20c-41bc-a93c-43cbc2178943-config-volume") pod "coredns-668d6bf9bc-dfvqn" (UID: "adc53a59-d20c-41bc-a93c-43cbc2178943") : object "kube-system"/"coredns" not registered
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.417119    1168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-105038\" already exists" pod="kube-system/kube-controller-manager-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.417148    1168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.430515    1168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.430913    1168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.431157    1168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.460438    1168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-105038\" already exists" pod="kube-system/kube-apiserver-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.464547    1168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-105038\" already exists" pod="kube-system/kube-scheduler-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.465133    1168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-105038\" already exists" pod="kube-system/kube-scheduler-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.465680    1168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-105038\" already exists" pod="kube-system/etcd-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: I1008 15:04:03.467651    1168 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.484868    1168 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-105038\" already exists" pod="kube-system/etcd-test-preload-105038"
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.866269    1168 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 08 15:04:03 test-preload-105038 kubelet[1168]: E1008 15:04:03.866339    1168 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adc53a59-d20c-41bc-a93c-43cbc2178943-config-volume podName:adc53a59-d20c-41bc-a93c-43cbc2178943 nodeName:}" failed. No retries permitted until 2025-10-08 15:04:04.866325766 +0000 UTC m=+6.690393866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/adc53a59-d20c-41bc-a93c-43cbc2178943-config-volume") pod "coredns-668d6bf9bc-dfvqn" (UID: "adc53a59-d20c-41bc-a93c-43cbc2178943") : object "kube-system"/"coredns" not registered
	Oct 08 15:04:04 test-preload-105038 kubelet[1168]: E1008 15:04:04.873991    1168 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 08 15:04:04 test-preload-105038 kubelet[1168]: E1008 15:04:04.874077    1168 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adc53a59-d20c-41bc-a93c-43cbc2178943-config-volume podName:adc53a59-d20c-41bc-a93c-43cbc2178943 nodeName:}" failed. No retries permitted until 2025-10-08 15:04:06.874061939 +0000 UTC m=+8.698130050 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/adc53a59-d20c-41bc-a93c-43cbc2178943-config-volume") pod "coredns-668d6bf9bc-dfvqn" (UID: "adc53a59-d20c-41bc-a93c-43cbc2178943") : object "kube-system"/"coredns" not registered
	Oct 08 15:04:04 test-preload-105038 kubelet[1168]: I1008 15:04:04.881009    1168 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 08 15:04:08 test-preload-105038 kubelet[1168]: E1008 15:04:08.348550    1168 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759935848348263203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 15:04:08 test-preload-105038 kubelet[1168]: E1008 15:04:08.348660    1168 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759935848348263203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 15:04:15 test-preload-105038 kubelet[1168]: I1008 15:04:15.355819    1168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 15:04:18 test-preload-105038 kubelet[1168]: E1008 15:04:18.353160    1168 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759935858350814862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 08 15:04:18 test-preload-105038 kubelet[1168]: E1008 15:04:18.353200    1168 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759935858350814862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d723a8671de3de678c19afdae8517a82c6c03ba8a3f718b7aa82732f662e213b] <==
	I1008 15:04:03.921047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-105038 -n test-preload-105038
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-105038 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-105038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-105038
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-105038: (1.053385054s)
--- FAIL: TestPreload (162.98s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (80.37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-783785 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 15:11:36.606062  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-783785 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.514808066s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-783785] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-783785" primary control-plane node in "pause-783785" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-783785" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 15:11:11.806437  401545 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:11:11.806966  401545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:11:11.806979  401545 out.go:374] Setting ErrFile to fd 2...
	I1008 15:11:11.806984  401545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:11:11.807171  401545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 15:11:11.807717  401545 out.go:368] Setting JSON to false
	I1008 15:11:11.808731  401545 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6804,"bootTime":1759929468,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:11:11.808832  401545 start.go:141] virtualization: kvm guest
	I1008 15:11:11.810651  401545 out.go:179] * [pause-783785] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:11:11.812014  401545 notify.go:220] Checking for updates...
	I1008 15:11:11.812050  401545 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:11:11.813260  401545 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:11:11.814488  401545 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 15:11:11.815538  401545 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 15:11:11.819584  401545 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:11:11.820723  401545 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:11:11.822216  401545 config.go:182] Loaded profile config "pause-783785": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:11:11.822645  401545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:11:11.822725  401545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:11:11.837175  401545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I1008 15:11:11.837774  401545 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:11:11.838394  401545 main.go:141] libmachine: Using API Version  1
	I1008 15:11:11.838421  401545 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:11:11.838802  401545 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:11:11.839021  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:11.839302  401545 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:11:11.839774  401545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:11:11.839818  401545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:11:11.854334  401545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I1008 15:11:11.855000  401545 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:11:11.855604  401545 main.go:141] libmachine: Using API Version  1
	I1008 15:11:11.855634  401545 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:11:11.856020  401545 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:11:11.856235  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:11.892014  401545 out.go:179] * Using the kvm2 driver based on existing profile
	I1008 15:11:11.893198  401545 start.go:305] selected driver: kvm2
	I1008 15:11:11.893218  401545 start.go:925] validating driver "kvm2" against &{Name:pause-783785 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-783785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:11:11.893433  401545 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:11:11.893903  401545 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:11:11.894008  401545 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 15:11:11.910374  401545 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 15:11:11.910437  401545 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 15:11:11.927411  401545 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 15:11:11.928631  401545 cni.go:84] Creating CNI manager for ""
	I1008 15:11:11.928716  401545 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 15:11:11.928807  401545 start.go:349] cluster config:
	{Name:pause-783785 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-783785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:11:11.928993  401545 iso.go:125] acquiring lock: {Name:mkaa45da6237a5a16f5f1d676ea2e57ba969b9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:11:11.930735  401545 out.go:179] * Starting "pause-783785" primary control-plane node in "pause-783785" cluster
	I1008 15:11:11.931875  401545 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:11:11.931927  401545 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:11:11.931938  401545 cache.go:58] Caching tarball of preloaded images
	I1008 15:11:11.932055  401545 preload.go:233] Found /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:11:11.932068  401545 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 15:11:11.932226  401545 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/config.json ...
	I1008 15:11:11.932513  401545 start.go:360] acquireMachinesLock for pause-783785: {Name:mka12a7774d0aa7dccf7190e47a0dc3a854191d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 15:11:45.988927  401545 start.go:364] duration metric: took 34.056357675s to acquireMachinesLock for "pause-783785"
	I1008 15:11:45.988992  401545 start.go:96] Skipping create...Using existing machine configuration
	I1008 15:11:45.989000  401545 fix.go:54] fixHost starting: 
	I1008 15:11:45.989489  401545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 15:11:45.989551  401545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 15:11:46.011518  401545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I1008 15:11:46.012265  401545 main.go:141] libmachine: () Calling .GetVersion
	I1008 15:11:46.012947  401545 main.go:141] libmachine: Using API Version  1
	I1008 15:11:46.013002  401545 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 15:11:46.013546  401545 main.go:141] libmachine: () Calling .GetMachineName
	I1008 15:11:46.013851  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:46.014028  401545 main.go:141] libmachine: (pause-783785) Calling .GetState
	I1008 15:11:46.017284  401545 fix.go:112] recreateIfNeeded on pause-783785: state=Running err=<nil>
	W1008 15:11:46.017315  401545 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 15:11:46.020643  401545 out.go:252] * Updating the running kvm2 "pause-783785" VM ...
	I1008 15:11:46.020695  401545 machine.go:93] provisionDockerMachine start ...
	I1008 15:11:46.020719  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:46.021055  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:46.026008  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.026530  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:46.026639  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.027231  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:46.027683  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.027912  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.028146  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:46.028369  401545 main.go:141] libmachine: Using SSH client type: native
	I1008 15:11:46.028684  401545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1008 15:11:46.028698  401545 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 15:11:46.173907  401545 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-783785
	
	I1008 15:11:46.173950  401545 main.go:141] libmachine: (pause-783785) Calling .GetMachineName
	I1008 15:11:46.176150  401545 buildroot.go:166] provisioning hostname "pause-783785"
	I1008 15:11:46.176249  401545 main.go:141] libmachine: (pause-783785) Calling .GetMachineName
	I1008 15:11:46.176537  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:46.183254  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.183958  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:46.184104  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.184744  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:46.184986  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.185208  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.185445  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:46.185699  401545 main.go:141] libmachine: Using SSH client type: native
	I1008 15:11:46.186024  401545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1008 15:11:46.186037  401545 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-783785 && echo "pause-783785" | sudo tee /etc/hostname
	I1008 15:11:46.407865  401545 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-783785
	
	I1008 15:11:46.407908  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:46.415709  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.416150  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:46.416329  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.416857  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:46.417493  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.418052  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.418262  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:46.418659  401545 main.go:141] libmachine: Using SSH client type: native
	I1008 15:11:46.418978  401545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1008 15:11:46.419004  401545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-783785' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-783785/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-783785' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 15:11:46.565177  401545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 15:11:46.565218  401545 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21681-357044/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-357044/.minikube}
	I1008 15:11:46.565246  401545 buildroot.go:174] setting up certificates
	I1008 15:11:46.565259  401545 provision.go:84] configureAuth start
	I1008 15:11:46.565272  401545 main.go:141] libmachine: (pause-783785) Calling .GetMachineName
	I1008 15:11:46.566720  401545 main.go:141] libmachine: (pause-783785) Calling .GetIP
	I1008 15:11:46.570901  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.571409  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:46.571485  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.571867  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:46.575862  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.576394  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:46.576514  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.577085  401545 provision.go:143] copyHostCerts
	I1008 15:11:46.577148  401545 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem, removing ...
	I1008 15:11:46.577159  401545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem
	I1008 15:11:46.577245  401545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/ca.pem (1082 bytes)
	I1008 15:11:46.577421  401545 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem, removing ...
	I1008 15:11:46.577430  401545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem
	I1008 15:11:46.577475  401545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/cert.pem (1123 bytes)
	I1008 15:11:46.577547  401545 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem, removing ...
	I1008 15:11:46.577554  401545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem
	I1008 15:11:46.577588  401545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-357044/.minikube/key.pem (1675 bytes)
	I1008 15:11:46.577727  401545 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem org=jenkins.pause-783785 san=[127.0.0.1 192.168.39.7 localhost minikube pause-783785]
	I1008 15:11:46.738341  401545 provision.go:177] copyRemoteCerts
	I1008 15:11:46.738442  401545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 15:11:46.738477  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:46.743056  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.743605  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:46.743687  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:46.744147  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:46.744445  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:46.744636  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:46.744796  401545 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/pause-783785/id_rsa Username:docker}
	I1008 15:11:46.852080  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 15:11:46.906924  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 15:11:46.985554  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 15:11:47.042295  401545 provision.go:87] duration metric: took 477.017563ms to configureAuth
	I1008 15:11:47.042346  401545 buildroot.go:189] setting minikube options for container-runtime
	I1008 15:11:47.042781  401545 config.go:182] Loaded profile config "pause-783785": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:11:47.042910  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:47.048268  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:47.048928  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:47.049347  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:47.049694  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:47.050053  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:47.050315  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:47.050621  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:47.050959  401545 main.go:141] libmachine: Using SSH client type: native
	I1008 15:11:47.051472  401545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1008 15:11:47.051512  401545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 15:11:52.706324  401545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 15:11:52.706364  401545 machine.go:96] duration metric: took 6.685649459s to provisionDockerMachine
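The sysconfig write just above is how minikube hands runtime flags to CRI-O: it drops an environment file that the crio systemd unit on the minikube ISO is expected to source, then restarts the service so the --insecure-registry entry for the service CIDR takes effect. Redone by hand it would look like this (a sketch; the file path and variable name are verbatim from the log, the unit's EnvironmentFile wiring is assumed):

    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio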
	I1008 15:11:52.706380  401545 start.go:293] postStartSetup for "pause-783785" (driver="kvm2")
	I1008 15:11:52.706395  401545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 15:11:52.706417  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:52.706777  401545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 15:11:52.706805  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:52.710637  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:52.711103  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:52.711133  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:52.711289  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:52.711568  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:52.711750  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:52.711938  401545 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/pause-783785/id_rsa Username:docker}
	I1008 15:11:52.803452  401545 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 15:11:52.809063  401545 info.go:137] Remote host: Buildroot 2025.02
	I1008 15:11:52.809108  401545 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-357044/.minikube/addons for local assets ...
	I1008 15:11:52.809186  401545 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-357044/.minikube/files for local assets ...
	I1008 15:11:52.809301  401545 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem -> 3619152.pem in /etc/ssl/certs
	I1008 15:11:52.809446  401545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 15:11:52.826465  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem --> /etc/ssl/certs/3619152.pem (1708 bytes)
	I1008 15:11:52.865687  401545 start.go:296] duration metric: took 159.286642ms for postStartSetup
	I1008 15:11:52.865743  401545 fix.go:56] duration metric: took 6.876741913s for fixHost
	I1008 15:11:52.865774  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:52.869223  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:52.869672  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:52.869705  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:52.869926  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:52.870177  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:52.870347  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:52.870563  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:52.870777  401545 main.go:141] libmachine: Using SSH client type: native
	I1008 15:11:52.871035  401545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I1008 15:11:52.871048  401545 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 15:11:52.995389  401545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759936312.990320595
	
	I1008 15:11:52.995421  401545 fix.go:216] guest clock: 1759936312.990320595
	I1008 15:11:52.995434  401545 fix.go:229] Guest: 2025-10-08 15:11:52.990320595 +0000 UTC Remote: 2025-10-08 15:11:52.865749661 +0000 UTC m=+41.105457836 (delta=124.570934ms)
	I1008 15:11:52.995489  401545 fix.go:200] guest clock delta is within tolerance: 124.570934ms
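The fix.go lines above show the guest clock check: minikube runs date +%s.%N over SSH, subtracts the host clock, and only forces a resync when the delta exceeds its tolerance; here the ~125ms skew passes. A standalone version of the same comparison (hypothetical; assumes the SSH key and IP from this log and bc on the host):

    guest=$(ssh -i /home/jenkins/minikube-integration/21681-357044/.minikube/machines/pause-783785/id_rsa \
      docker@192.168.39.7 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) s"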
	I1008 15:11:52.995496  401545 start.go:83] releasing machines lock for "pause-783785", held for 7.006529428s
	I1008 15:11:52.995527  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:52.995829  401545 main.go:141] libmachine: (pause-783785) Calling .GetIP
	I1008 15:11:52.999977  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:53.000505  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:53.000537  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:53.000785  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:53.001413  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:53.001633  401545 main.go:141] libmachine: (pause-783785) Calling .DriverName
	I1008 15:11:53.001752  401545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 15:11:53.001812  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:53.002045  401545 ssh_runner.go:195] Run: cat /version.json
	I1008 15:11:53.002078  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHHostname
	I1008 15:11:53.005897  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:53.006264  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:53.006579  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:53.006638  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:53.007402  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:53.007410  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:53.007454  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:53.007676  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHPort
	I1008 15:11:53.007710  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:53.007944  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHKeyPath
	I1008 15:11:53.007953  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:53.008184  401545 main.go:141] libmachine: (pause-783785) Calling .GetSSHUsername
	I1008 15:11:53.008182  401545 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/pause-783785/id_rsa Username:docker}
	I1008 15:11:53.008394  401545 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/pause-783785/id_rsa Username:docker}
	I1008 15:11:53.094477  401545 ssh_runner.go:195] Run: systemctl --version
	I1008 15:11:53.134392  401545 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 15:11:53.290736  401545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 15:11:53.299506  401545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 15:11:53.299617  401545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 15:11:53.312550  401545 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
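The find invocation above neutralizes any pre-existing bridge or podman CNI definitions by renaming them with a .mk_disabled suffix, so the CNI that minikube configures later is the only active one; the loopback config is deliberately left alone. The same rename written out more readably (a sketch of the logged command, not a different mechanism):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;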
	I1008 15:11:53.312585  401545 start.go:495] detecting cgroup driver to use...
	I1008 15:11:53.312684  401545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 15:11:53.338668  401545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 15:11:53.359336  401545 docker.go:218] disabling cri-docker service (if available) ...
	I1008 15:11:53.359425  401545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 15:11:53.384522  401545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 15:11:53.403606  401545 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 15:11:53.588633  401545 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 15:11:53.782175  401545 docker.go:234] disabling docker service ...
	I1008 15:11:53.782273  401545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 15:11:53.818510  401545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 15:11:53.836736  401545 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 15:11:54.031315  401545 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 15:11:54.227209  401545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
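The stop/disable/mask sequence above takes docker and cri-docker fully out of the picture so CRI-O is the only runtime the kubelet can bind to; the final is-active probe confirms docker stayed down. Verifying the end state by hand (a sketch; unit names as in the log):

    systemctl is-active docker.service cri-docker.service   # both should report inactive
    systemctl is-enabled docker.service cri-docker.service  # both should report masked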
	I1008 15:11:54.247105  401545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 15:11:54.277214  401545 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 15:11:54.277286  401545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.297808  401545 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 15:11:54.297883  401545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.317307  401545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.337057  401545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.351098  401545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 15:11:54.367640  401545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.380813  401545 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.395024  401545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 15:11:54.411002  401545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 15:11:54.422846  401545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 15:11:54.437176  401545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:11:54.659466  401545 ssh_runner.go:195] Run: sudo systemctl restart crio
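The sed sequence above edits CRI-O's drop-in /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before the daemon-reload and restart. After those edits the drop-in should contain roughly the following (a reconstruction from the sed patterns, not captured in the log; section placement assumes the stock CRI-O config layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]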
	I1008 15:11:55.241582  401545 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 15:11:55.241713  401545 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 15:11:55.248800  401545 start.go:563] Will wait 60s for crictl version
	I1008 15:11:55.248899  401545 ssh_runner.go:195] Run: which crictl
	I1008 15:11:55.253472  401545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 15:11:55.296420  401545 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1008 15:11:55.296534  401545 ssh_runner.go:195] Run: crio --version
	I1008 15:11:55.332005  401545 ssh_runner.go:195] Run: crio --version
	I1008 15:11:55.367688  401545 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1008 15:11:55.368997  401545 main.go:141] libmachine: (pause-783785) Calling .GetIP
	I1008 15:11:55.372190  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:55.372728  401545 main.go:141] libmachine: (pause-783785) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:a5:81", ip: ""} in network mk-pause-783785: {Iface:virbr1 ExpiryTime:2025-10-08 16:09:59 +0000 UTC Type:0 Mac:52:54:00:df:a5:81 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:pause-783785 Clientid:01:52:54:00:df:a5:81}
	I1008 15:11:55.372751  401545 main.go:141] libmachine: (pause-783785) DBG | domain pause-783785 has defined IP address 192.168.39.7 and MAC address 52:54:00:df:a5:81 in network mk-pause-783785
	I1008 15:11:55.372997  401545 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1008 15:11:55.378634  401545 kubeadm.go:883] updating cluster {Name:pause-783785 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-783785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia
-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 15:11:55.378774  401545 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:11:55.378820  401545 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:11:55.441475  401545 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:11:55.441510  401545 crio.go:433] Images already preloaded, skipping extraction
	I1008 15:11:55.441612  401545 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 15:11:55.486100  401545 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 15:11:55.486126  401545 cache_images.go:85] Images are preloaded, skipping loading
	I1008 15:11:55.486133  401545 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.34.1 crio true true} ...
	I1008 15:11:55.486236  401545 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-783785 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-783785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
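The kubelet unit fragment above uses systemd's drop-in override pattern: the empty ExecStart= clears any packaged command line before the minikube-specific one is set, and Wants=crio.service orders the runtime ahead of the kubelet. Once the 310-byte 10-kubeadm.conf drop-in is scp'd into place a few lines below, the merged unit can be checked with (a sketch):

    systemctl cat kubelet | grep -A2 '^ExecStart'
    # ExecStart=
    # ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=...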
	I1008 15:11:55.486317  401545 ssh_runner.go:195] Run: crio config
	I1008 15:11:55.545043  401545 cni.go:84] Creating CNI manager for ""
	I1008 15:11:55.545071  401545 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 15:11:55.545095  401545 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 15:11:55.545122  401545 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-783785 NodeName:pause-783785 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 15:11:55.545320  401545 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-783785"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.7"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
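The generated kubeadm config above stacks four documents in one file: InitConfiguration (node identity and API endpoint), ClusterConfiguration (control-plane component flags), KubeletConfiguration (cgroupfs driver, disk eviction disabled), and KubeProxyConfiguration (conntrack timeouts zeroed so kube-proxy leaves the host's nf_conntrack sysctls untouched). It is written to /var/tmp/minikube/kubeadm.yaml.new below and could be sanity-checked with (a sketch; kubeadm config validate assumes kubeadm v1.26 or newer, which v1.34.1 satisfies):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new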
	
	I1008 15:11:55.545425  401545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 15:11:55.558539  401545 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 15:11:55.558640  401545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 15:11:55.572012  401545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1008 15:11:55.598327  401545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 15:11:55.622575  401545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1008 15:11:55.678211  401545 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I1008 15:11:55.690250  401545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 15:11:56.073317  401545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 15:11:56.105630  401545 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785 for IP: 192.168.39.7
	I1008 15:11:56.105659  401545 certs.go:195] generating shared ca certs ...
	I1008 15:11:56.105684  401545 certs.go:227] acquiring lock for ca certs: {Name:mk0e7909a623394743b0dc10595ebb34d09a814f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:11:56.105904  401545 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key
	I1008 15:11:56.106004  401545 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key
	I1008 15:11:56.106030  401545 certs.go:257] generating profile certs ...
	I1008 15:11:56.106174  401545 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/client.key
	I1008 15:11:56.106278  401545 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/apiserver.key.2f6b0bf4
	I1008 15:11:56.106349  401545 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/proxy-client.key
	I1008 15:11:56.106530  401545 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915.pem (1338 bytes)
	W1008 15:11:56.106578  401545 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915_empty.pem, impossibly tiny 0 bytes
	I1008 15:11:56.106593  401545 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 15:11:56.106628  401545 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/ca.pem (1082 bytes)
	I1008 15:11:56.106660  401545 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/cert.pem (1123 bytes)
	I1008 15:11:56.106691  401545 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/certs/key.pem (1675 bytes)
	I1008 15:11:56.106755  401545 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem (1708 bytes)
	I1008 15:11:56.107794  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 15:11:56.160100  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 15:11:56.235967  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 15:11:56.318152  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 15:11:56.419107  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 15:11:56.497592  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 15:11:56.595956  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 15:11:56.683814  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/pause-783785/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 15:11:56.804934  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/ssl/certs/3619152.pem --> /usr/share/ca-certificates/3619152.pem (1708 bytes)
	I1008 15:11:56.871378  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:11:56.929133  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915.pem --> /usr/share/ca-certificates/361915.pem (1338 bytes)
	I1008 15:11:56.973602  401545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:11:57.003505  401545 ssh_runner.go:195] Run: openssl version
	I1008 15:11:57.011447  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3619152.pem && ln -fs /usr/share/ca-certificates/3619152.pem /etc/ssl/certs/3619152.pem"
	I1008 15:11:57.034335  401545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3619152.pem
	I1008 15:11:57.040592  401545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:18 /usr/share/ca-certificates/3619152.pem
	I1008 15:11:57.040676  401545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3619152.pem
	I1008 15:11:57.050049  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3619152.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:11:57.066604  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:11:57.082566  401545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:11:57.088594  401545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:10 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:11:57.088685  401545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:11:57.099638  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:11:57.115035  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/361915.pem && ln -fs /usr/share/ca-certificates/361915.pem /etc/ssl/certs/361915.pem"
	I1008 15:11:57.134191  401545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/361915.pem
	I1008 15:11:57.140398  401545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:18 /usr/share/ca-certificates/361915.pem
	I1008 15:11:57.140464  401545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/361915.pem
	I1008 15:11:57.149173  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/361915.pem /etc/ssl/certs/51391683.0"
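The test -L / ln -fs pairs above build OpenSSL's hashed-directory lookup by hand: each CA lands in /usr/share/ca-certificates and gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix, which is how openssl verify -CApath locates issuers. The equivalent manual steps (a sketch; b5213941 matches the minikubeCA hash visible in this log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt    # hypothetical target cert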
	I1008 15:11:57.167574  401545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:11:57.173819  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:11:57.182409  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:11:57.190092  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:11:57.198691  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:11:57.209521  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:11:57.219736  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
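The -checkend 86400 probes above ask whether each control-plane certificate will still be valid 24 hours from now: openssl exits 0 when the certificate outlives the window and 1 when it would expire inside it, which is what lets minikube choose between reusing and regenerating certs. In isolation (a sketch using one of the paths from the log):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h; regenerate"
    fi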
	I1008 15:11:57.230308  401545 kubeadm.go:400] StartCluster: {Name:pause-783785 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-783785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:11:57.230584  401545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:11:57.230700  401545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:11:57.300787  401545 cri.go:89] found id: "e821f660efb5e2538c3c2a713432a6d4af63ee0d7a16d5374a3916fc582f951b"
	I1008 15:11:57.300813  401545 cri.go:89] found id: "f6878c36ed7af18cf6af91f35951258569707fe5d2f84cde06bb7dba17836102"
	I1008 15:11:57.300818  401545 cri.go:89] found id: "d6c7bb3166634af076974a3a79cbe1aa08eae7b0bf3ff87e3f5f736a159fa8c2"
	I1008 15:11:57.300823  401545 cri.go:89] found id: "801dd5f1a7764fce60d4f449f4eab05c60098b07c81b14896a3d73c102e3abbb"
	I1008 15:11:57.300827  401545 cri.go:89] found id: "c482e9b3ecc1c243792d9a6378e68a7a0b743ec05a40182714fb7bbc7d064f9f"
	I1008 15:11:57.300832  401545 cri.go:89] found id: "a03b02771b303b439d2dca3146ac53983f3827e98916471ef7a9b72479a58077"
	I1008 15:11:57.300836  401545 cri.go:89] found id: "6e478bab5a229891f3fad5b891af68ff928ef74d4ae3375fb11f62d73f186c69"
	I1008 15:11:57.300840  401545 cri.go:89] found id: "1774dda7e2ceeff72cdbff22626ef4021bd93ec83c14b4bfe6bba4ce3612e008"
	I1008 15:11:57.300844  401545 cri.go:89] found id: "2f67743cd9f9af9366555a6745ec9c9cec802d09757dbc6682b01f7d7212e3b6"
	I1008 15:11:57.300873  401545 cri.go:89] found id: "55644acd52a1a5afd7d327bc9068fb879150c76d1ebac1924ef3d949ae33450b"
	I1008 15:11:57.300881  401545 cri.go:89] found id: ""
	I1008 15:11:57.300984  401545 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-783785 -n pause-783785
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-783785 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-783785 logs -n 25: (1.711201559s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-181493                                                                                                                                             │ cert-options-181493       │ jenkins │ v1.37.0 │ 08 Oct 25 15:08 UTC │ 08 Oct 25 15:08 UTC │
	│ start   │ -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-074115 │ jenkins │ v1.37.0 │ 08 Oct 25 15:08 UTC │ 08 Oct 25 15:09 UTC │
	│ ssh     │ -p NoKubernetes-694490 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-694490       │ jenkins │ v1.37.0 │ 08 Oct 25 15:08 UTC │                     │
	│ stop    │ -p NoKubernetes-694490                                                                                                                                             │ NoKubernetes-694490       │ jenkins │ v1.37.0 │ 08 Oct 25 15:08 UTC │ 08 Oct 25 15:08 UTC │
	│ start   │ -p NoKubernetes-694490 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-694490       │ jenkins │ v1.37.0 │ 08 Oct 25 15:08 UTC │ 08 Oct 25 15:09 UTC │
	│ start   │ -p running-upgrade-280930 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-280930    │ jenkins │ v1.37.0 │ 08 Oct 25 15:09 UTC │ 08 Oct 25 15:10 UTC │
	│ ssh     │ -p NoKubernetes-694490 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-694490       │ jenkins │ v1.37.0 │ 08 Oct 25 15:09 UTC │                     │
	│ delete  │ -p NoKubernetes-694490                                                                                                                                             │ NoKubernetes-694490       │ jenkins │ v1.37.0 │ 08 Oct 25 15:09 UTC │ 08 Oct 25 15:09 UTC │
	│ start   │ -p pause-783785 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-783785              │ jenkins │ v1.37.0 │ 08 Oct 25 15:09 UTC │ 08 Oct 25 15:11 UTC │
	│ stop    │ -p kubernetes-upgrade-074115                                                                                                                                       │ kubernetes-upgrade-074115 │ jenkins │ v1.37.0 │ 08 Oct 25 15:09 UTC │ 08 Oct 25 15:09 UTC │
	│ start   │ -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-074115 │ jenkins │ v1.37.0 │ 08 Oct 25 15:09 UTC │ 08 Oct 25 15:10 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-280930 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-280930    │ jenkins │ v1.37.0 │ 08 Oct 25 15:10 UTC │                     │
	│ delete  │ -p running-upgrade-280930                                                                                                                                          │ running-upgrade-280930    │ jenkins │ v1.37.0 │ 08 Oct 25 15:10 UTC │ 08 Oct 25 15:10 UTC │
	│ start   │ -p stopped-upgrade-236862 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-236862    │ jenkins │ v1.32.0 │ 08 Oct 25 15:10 UTC │ 08 Oct 25 15:11 UTC │
	│ start   │ -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-074115 │ jenkins │ v1.37.0 │ 08 Oct 25 15:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-074115 │ jenkins │ v1.37.0 │ 08 Oct 25 15:10 UTC │ 08 Oct 25 15:10 UTC │
	│ delete  │ -p kubernetes-upgrade-074115                                                                                                                                       │ kubernetes-upgrade-074115 │ jenkins │ v1.37.0 │ 08 Oct 25 15:10 UTC │ 08 Oct 25 15:10 UTC │
	│ start   │ -p auto-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                  │ auto-900200               │ jenkins │ v1.37.0 │ 08 Oct 25 15:10 UTC │                     │
	│ stop    │ stopped-upgrade-236862 stop                                                                                                                                        │ stopped-upgrade-236862    │ jenkins │ v1.32.0 │ 08 Oct 25 15:11 UTC │ 08 Oct 25 15:11 UTC │
	│ start   │ -p stopped-upgrade-236862 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ stopped-upgrade-236862    │ jenkins │ v1.37.0 │ 08 Oct 25 15:11 UTC │ 08 Oct 25 15:11 UTC │
	│ start   │ -p cert-expiration-189103 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ cert-expiration-189103    │ jenkins │ v1.37.0 │ 08 Oct 25 15:11 UTC │                     │
	│ start   │ -p pause-783785 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-783785              │ jenkins │ v1.37.0 │ 08 Oct 25 15:11 UTC │ 08 Oct 25 15:12 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-236862 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ stopped-upgrade-236862    │ jenkins │ v1.37.0 │ 08 Oct 25 15:11 UTC │                     │
	│ delete  │ -p stopped-upgrade-236862                                                                                                                                          │ stopped-upgrade-236862    │ jenkins │ v1.37.0 │ 08 Oct 25 15:11 UTC │ 08 Oct 25 15:11 UTC │
	│ start   │ -p kindnet-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kindnet-900200            │ jenkins │ v1.37.0 │ 08 Oct 25 15:11 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 15:11:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 15:11:58.135711  402103 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:11:58.135987  402103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:11:58.136000  402103 out.go:374] Setting ErrFile to fd 2...
	I1008 15:11:58.136007  402103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:11:58.136323  402103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 15:11:58.137038  402103 out.go:368] Setting JSON to false
	I1008 15:11:58.138435  402103 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6850,"bootTime":1759929468,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:11:58.138615  402103 start.go:141] virtualization: kvm guest
	I1008 15:11:58.140703  402103 out.go:179] * [kindnet-900200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:11:58.142009  402103 notify.go:220] Checking for updates...
	I1008 15:11:58.142028  402103 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:11:58.143299  402103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:11:58.144715  402103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 15:11:58.146054  402103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 15:11:58.147257  402103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:11:58.148398  402103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1008 15:11:53.953856  401204 pod_ready.go:104] pod "coredns-66bc5c9577-57htw" is not "Ready", error: <nil>
	W1008 15:11:55.955729  401204 pod_ready.go:104] pod "coredns-66bc5c9577-57htw" is not "Ready", error: <nil>
	I1008 15:11:58.150141  402103 config.go:182] Loaded profile config "auto-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:11:58.150307  402103 config.go:182] Loaded profile config "cert-expiration-189103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:11:58.150558  402103 config.go:182] Loaded profile config "pause-783785": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:11:58.150699  402103 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:11:58.189052  402103 out.go:179] * Using the kvm2 driver based on user configuration
	I1008 15:11:58.190192  402103 start.go:305] selected driver: kvm2
	I1008 15:11:58.190216  402103 start.go:925] validating driver "kvm2" against <nil>
	I1008 15:11:58.190235  402103 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:11:58.191323  402103 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:11:58.191461  402103 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 15:11:58.206506  402103 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 15:11:58.206551  402103 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 15:11:58.221527  402103 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 15:11:58.221587  402103 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 15:11:58.221991  402103 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 15:11:58.222031  402103 cni.go:84] Creating CNI manager for "kindnet"
	I1008 15:11:58.222042  402103 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 15:11:58.222110  402103 start.go:349] cluster config:
	{Name:kindnet-900200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-900200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:11:58.222241  402103 iso.go:125] acquiring lock: {Name:mkaa45da6237a5a16f5f1d676ea2e57ba969b9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 15:11:58.224944  402103 out.go:179] * Starting "kindnet-900200" primary control-plane node in "kindnet-900200" cluster
	I1008 15:11:58.226294  402103 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 15:11:58.226370  402103 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 15:11:58.226387  402103 cache.go:58] Caching tarball of preloaded images
	I1008 15:11:58.226537  402103 preload.go:233] Found /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1008 15:11:58.226555  402103 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
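The preload check above is a cache hit: the versioned image tarball already exists locally, so no download happens, and extraction is skipped later on nodes whose image store is already populated. Confirming the cache entry (a sketch; path verbatim from the log):

    ls -lh /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4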
	I1008 15:11:58.226701  402103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/config.json ...
	I1008 15:11:58.226726  402103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/config.json: {Name:mka85815c87509d162402b5d9c001f1ce96fbd89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 15:11:58.226919  402103 start.go:360] acquireMachinesLock for kindnet-900200: {Name:mka12a7774d0aa7dccf7190e47a0dc3a854191d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 15:11:58.226958  402103 start.go:364] duration metric: took 21.309µs to acquireMachinesLock for "kindnet-900200"
	I1008 15:11:58.226980  402103 start.go:93] Provisioning new machine with config: &{Name:kindnet-900200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-900200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 15:11:58.227051  402103 start.go:125] createHost starting for "" (driver="kvm2")
	I1008 15:11:56.871378  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 15:11:56.929133  401545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-357044/.minikube/certs/361915.pem --> /usr/share/ca-certificates/361915.pem (1338 bytes)
	I1008 15:11:56.973602  401545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 15:11:57.003505  401545 ssh_runner.go:195] Run: openssl version
	I1008 15:11:57.011447  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3619152.pem && ln -fs /usr/share/ca-certificates/3619152.pem /etc/ssl/certs/3619152.pem"
	I1008 15:11:57.034335  401545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3619152.pem
	I1008 15:11:57.040592  401545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 14:18 /usr/share/ca-certificates/3619152.pem
	I1008 15:11:57.040676  401545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3619152.pem
	I1008 15:11:57.050049  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3619152.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 15:11:57.066604  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 15:11:57.082566  401545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:11:57.088594  401545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 14:10 /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:11:57.088685  401545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 15:11:57.099638  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 15:11:57.115035  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/361915.pem && ln -fs /usr/share/ca-certificates/361915.pem /etc/ssl/certs/361915.pem"
	I1008 15:11:57.134191  401545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/361915.pem
	I1008 15:11:57.140398  401545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 14:18 /usr/share/ca-certificates/361915.pem
	I1008 15:11:57.140464  401545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/361915.pem
	I1008 15:11:57.149173  401545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/361915.pem /etc/ssl/certs/51391683.0"
	I1008 15:11:57.167574  401545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 15:11:57.173819  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 15:11:57.182409  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 15:11:57.190092  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 15:11:57.198691  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 15:11:57.209521  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 15:11:57.219736  401545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 15:11:57.230308  401545 kubeadm.go:400] StartCluster: {Name:pause-783785 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-783785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 15:11:57.230584  401545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 15:11:57.230700  401545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 15:11:57.300787  401545 cri.go:89] found id: "e821f660efb5e2538c3c2a713432a6d4af63ee0d7a16d5374a3916fc582f951b"
	I1008 15:11:57.300813  401545 cri.go:89] found id: "f6878c36ed7af18cf6af91f35951258569707fe5d2f84cde06bb7dba17836102"
	I1008 15:11:57.300818  401545 cri.go:89] found id: "d6c7bb3166634af076974a3a79cbe1aa08eae7b0bf3ff87e3f5f736a159fa8c2"
	I1008 15:11:57.300823  401545 cri.go:89] found id: "801dd5f1a7764fce60d4f449f4eab05c60098b07c81b14896a3d73c102e3abbb"
	I1008 15:11:57.300827  401545 cri.go:89] found id: "c482e9b3ecc1c243792d9a6378e68a7a0b743ec05a40182714fb7bbc7d064f9f"
	I1008 15:11:57.300832  401545 cri.go:89] found id: "a03b02771b303b439d2dca3146ac53983f3827e98916471ef7a9b72479a58077"
	I1008 15:11:57.300836  401545 cri.go:89] found id: "6e478bab5a229891f3fad5b891af68ff928ef74d4ae3375fb11f62d73f186c69"
	I1008 15:11:57.300840  401545 cri.go:89] found id: "1774dda7e2ceeff72cdbff22626ef4021bd93ec83c14b4bfe6bba4ce3612e008"
	I1008 15:11:57.300844  401545 cri.go:89] found id: "2f67743cd9f9af9366555a6745ec9c9cec802d09757dbc6682b01f7d7212e3b6"
	I1008 15:11:57.300873  401545 cri.go:89] found id: "55644acd52a1a5afd7d327bc9068fb879150c76d1ebac1924ef3d949ae33450b"
	I1008 15:11:57.300881  401545 cri.go:89] found id: ""
	I1008 15:11:57.300984  401545 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-783785 -n pause-783785
helpers_test.go:269: (dbg) Run:  kubectl --context pause-783785 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (80.37s)
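Note on the dump above: before kubeadm.go:400 reaches StartCluster, the second start probes every control-plane certificate with openssl x509 -checkend 86400, which exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours). A minimal standalone sketch of that probe, with the certificate paths copied from the log; the program structure and error handling are illustrative, not minikube's actual code:

// certcheck.go: hedged sketch of the cert freshness probe seen in the
// post-mortem log. Assumes the openssl binary is on PATH; the paths
// below are the ones exercised in the dump above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// -checkend 86400 exits non-zero if the cert expires within 24h.
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s: expires within 24h or unreadable (%v)\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}

After the certificate checks, the dump enumerates the paused kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system (ten container IDs found) and invokes runc list, which is where the captured log ends.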

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 21.83
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.16
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.67
22 TestOffline 119.95
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.83
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.53
35 TestAddons/parallel/Registry 20.93
36 TestAddons/parallel/RegistryCreds 0.96
38 TestAddons/parallel/InspektorGadget 6.53
39 TestAddons/parallel/MetricsServer 7.52
41 TestAddons/parallel/CSI 49.39
42 TestAddons/parallel/Headlamp 20.71
43 TestAddons/parallel/CloudSpanner 7.02
44 TestAddons/parallel/LocalPath 56.2
45 TestAddons/parallel/NvidiaDevicePlugin 6.93
46 TestAddons/parallel/Yakd 12.29
48 TestAddons/StoppedEnableDisable 83.89
49 TestCertOptions 82.75
50 TestCertExpiration 409.28
52 TestForceSystemdFlag 60.42
53 TestForceSystemdEnv 41.65
55 TestKVMDriverInstallOrUpdate 1.2
59 TestErrorSpam/setup 37.23
60 TestErrorSpam/start 0.38
61 TestErrorSpam/status 0.84
62 TestErrorSpam/pause 1.72
63 TestErrorSpam/unpause 2.06
64 TestErrorSpam/stop 4.28
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.44
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 35.33
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
76 TestFunctional/serial/CacheCmd/cache/add_local 2.16
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 49.42
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.5
87 TestFunctional/serial/LogsFileCmd 1.51
88 TestFunctional/serial/InvalidService 4.29
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 32.97
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 1.07
98 TestFunctional/parallel/ServiceCmdConnect 9.62
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 48.7
102 TestFunctional/parallel/SSHCmd 0.45
103 TestFunctional/parallel/CpCmd 1.38
104 TestFunctional/parallel/MySQL 26.05
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.24
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
114 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 0.61
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.35
122 TestFunctional/parallel/ImageCommands/Setup 1.75
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
140 TestFunctional/parallel/ServiceCmd/List 0.3
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
146 TestFunctional/parallel/ServiceCmd/Format 0.47
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
148 TestFunctional/parallel/ServiceCmd/URL 0.51
149 TestFunctional/parallel/MountCmd/any-port 22.79
150 TestFunctional/parallel/ProfileCmd/profile_list 0.51
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
152 TestFunctional/parallel/MountCmd/specific-port 1.84
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 233.01
162 TestMultiControlPlane/serial/DeployApp 6.87
163 TestMultiControlPlane/serial/PingHostFromPods 1.25
164 TestMultiControlPlane/serial/AddWorkerNode 46.34
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
167 TestMultiControlPlane/serial/CopyFile 13.55
168 TestMultiControlPlane/serial/StopSecondaryNode 82.38
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 36.44
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 389.75
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.63
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
175 TestMultiControlPlane/serial/StopCluster 234.99
176 TestMultiControlPlane/serial/RestartCluster 93.15
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 70.96
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
183 TestJSONOutput/start/Command 79.97
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.8
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.71
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.96
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 79.6
215 TestMountStart/serial/StartWithMountFirst 21.49
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 24.88
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.26
222 TestMountStart/serial/RestartStopped 19.47
223 TestMountStart/serial/VerifyMountPostStop 0.4
226 TestMultiNode/serial/FreshStart2Nodes 131.23
227 TestMultiNode/serial/DeployApp2Nodes 6.41
228 TestMultiNode/serial/PingHostFrom2Pods 0.84
229 TestMultiNode/serial/AddNode 42.06
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.61
232 TestMultiNode/serial/CopyFile 7.59
233 TestMultiNode/serial/StopNode 2.59
234 TestMultiNode/serial/StartAfterStop 39.81
235 TestMultiNode/serial/RestartKeepsNodes 305.33
236 TestMultiNode/serial/DeleteNode 2.9
237 TestMultiNode/serial/StopMultiNode 163.42
238 TestMultiNode/serial/RestartMultiNode 127.07
239 TestMultiNode/serial/ValidateNameConflict 44.02
246 TestScheduledStopUnix 110.96
250 TestRunningBinaryUpgrade 121.15
252 TestKubernetesUpgrade 143.49
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 96.91
264 TestNetworkPlugins/group/false 3.82
268 TestNoKubernetes/serial/StartWithStopK8s 33.98
269 TestNoKubernetes/serial/Start 21.82
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
278 TestNoKubernetes/serial/ProfileList 1.13
279 TestNoKubernetes/serial/Stop 1.39
280 TestNoKubernetes/serial/StartNoArgs 53.33
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
283 TestPause/serial/Start 90.45
284 TestStoppedBinaryUpgrade/Setup 2.62
285 TestStoppedBinaryUpgrade/Upgrade 100.73
286 TestNetworkPlugins/group/auto/Start 90.03
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
289 TestNetworkPlugins/group/kindnet/Start 58.62
290 TestNetworkPlugins/group/auto/KubeletFlags 0.26
291 TestNetworkPlugins/group/auto/NetCatPod 11.32
292 TestNetworkPlugins/group/calico/Start 96.02
293 TestNetworkPlugins/group/auto/DNS 0.18
294 TestNetworkPlugins/group/auto/Localhost 0.23
295 TestNetworkPlugins/group/auto/HairPin 0.17
296 TestNetworkPlugins/group/custom-flannel/Start 84.1
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
299 TestNetworkPlugins/group/kindnet/NetCatPod 11.63
300 TestNetworkPlugins/group/kindnet/DNS 0.17
301 TestNetworkPlugins/group/kindnet/Localhost 0.16
302 TestNetworkPlugins/group/kindnet/HairPin 0.15
303 TestNetworkPlugins/group/enable-default-cni/Start 84.14
304 TestNetworkPlugins/group/flannel/Start 76.31
305 TestNetworkPlugins/group/calico/ControllerPod 6.01
306 TestNetworkPlugins/group/calico/KubeletFlags 0.23
307 TestNetworkPlugins/group/calico/NetCatPod 10.32
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.32
310 TestNetworkPlugins/group/calico/DNS 0.28
311 TestNetworkPlugins/group/calico/Localhost 0.28
312 TestNetworkPlugins/group/calico/HairPin 0.15
313 TestNetworkPlugins/group/custom-flannel/DNS 0.19
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
316 TestNetworkPlugins/group/bridge/Start 84.39
318 TestStartStop/group/old-k8s-version/serial/FirstStart 112.84
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
326 TestNetworkPlugins/group/flannel/NetCatPod 12.28
328 TestStartStop/group/no-preload/serial/FirstStart 108.1
329 TestNetworkPlugins/group/flannel/DNS 0.21
330 TestNetworkPlugins/group/flannel/Localhost 0.19
331 TestNetworkPlugins/group/flannel/HairPin 0.19
333 TestStartStop/group/embed-certs/serial/FirstStart 85.4
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
335 TestNetworkPlugins/group/bridge/NetCatPod 11.31
336 TestNetworkPlugins/group/bridge/DNS 0.18
337 TestNetworkPlugins/group/bridge/Localhost 0.15
338 TestNetworkPlugins/group/bridge/HairPin 0.16
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.88
341 TestStartStop/group/old-k8s-version/serial/DeployApp 10.35
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.27
343 TestStartStop/group/old-k8s-version/serial/Stop 75.62
344 TestStartStop/group/no-preload/serial/DeployApp 11.34
345 TestStartStop/group/embed-certs/serial/DeployApp 11.32
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
347 TestStartStop/group/no-preload/serial/Stop 87.08
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
349 TestStartStop/group/embed-certs/serial/Stop 89.48
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 86.8
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
354 TestStartStop/group/old-k8s-version/serial/SecondStart 45.21
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
356 TestStartStop/group/no-preload/serial/SecondStart 57.19
357 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.01
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
359 TestStartStop/group/embed-certs/serial/SecondStart 58.38
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
362 TestStartStop/group/old-k8s-version/serial/Pause 3.23
364 TestStartStop/group/newest-cni/serial/FirstStart 59.72
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.15
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.03
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 19.01
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
371 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
372 TestStartStop/group/embed-certs/serial/Pause 4.26
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
374 TestStartStop/group/no-preload/serial/Pause 3.73
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
377 TestStartStop/group/newest-cni/serial/Stop 11.7
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
379 TestStartStop/group/newest-cni/serial/SecondStart 34.4
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/newest-cni/serial/Pause 2.55
x
+
TestDownloadOnly/v1.28.0/json-events (21.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-810734 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-810734 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.833845973s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.83s)
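json-events drives minikube start -o=json, which streams one JSON event per line on stdout; the TestJSONOutput/*/DistinctCurrentSteps and IncreasingCurrentSteps entries in the pass table above validate the step counters carried by those events. A hedged sketch of a consumer for such a stream, assuming CloudEvents-style type and data fields (the field names are an assumption based on minikube's JSON output convention, not asserted by this report):

// jsonevents.go: decode newline-delimited JSON events piped from
// `minikube start -o=json ...`. The event schema here is assumed,
// not taken from this report.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string         `json:"type"`
	Data map[string]any `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		fmt.Printf("%-50s %v\n", ev.Type, ev.Data["name"])
	}
}

Usage would be along the lines of: out/minikube-linux-amd64 start -o=json ... | go run jsonevents.go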

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1008 14:09:27.867161  361915 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1008 14:09:27.867288  361915 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
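preload-exists only has to confirm that the tarball fetched during json-events landed in the cache; the log lines above print the exact path under the minikube home directory. A sketch of the equivalent check, assuming the filename template seen in the log (the v18 preload schema version is copied from the path above, and the MINIKUBE_HOME fallback to ~/.minikube is a simplification, not minikube's actual resolution logic):

// preloadcheck.go: stat the preload tarball path printed in the log
// above. MINIKUBE_HOME handling is simplified for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := os.Getenv("MINIKUBE_HOME")
	if base == "" {
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		base = filepath.Join(home, ".minikube")
	}
	tarball := filepath.Join(base, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if fi, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("preload present: %s (%d bytes)\n", tarball, fi.Size())
	}
}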

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-810734
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-810734: exit status 85 (68.067991ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-810734 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-810734 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:09:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:09:06.080646  361927 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:09:06.080744  361927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:09:06.080748  361927 out.go:374] Setting ErrFile to fd 2...
	I1008 14:09:06.080752  361927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:09:06.080934  361927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	W1008 14:09:06.081060  361927 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21681-357044/.minikube/config/config.json: open /home/jenkins/minikube-integration/21681-357044/.minikube/config/config.json: no such file or directory
	I1008 14:09:06.081603  361927 out.go:368] Setting JSON to true
	I1008 14:09:06.082605  361927 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3078,"bootTime":1759929468,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:09:06.082719  361927 start.go:141] virtualization: kvm guest
	I1008 14:09:06.085274  361927 out.go:99] [download-only-810734] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:09:06.085467  361927 notify.go:220] Checking for updates...
	W1008 14:09:06.085478  361927 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 14:09:06.087138  361927 out.go:171] MINIKUBE_LOCATION=21681
	I1008 14:09:06.088789  361927 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:09:06.090342  361927 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 14:09:06.092033  361927 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:09:06.093597  361927 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 14:09:06.096412  361927 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 14:09:06.096731  361927 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:09:06.385794  361927 out.go:99] Using the kvm2 driver based on user configuration
	I1008 14:09:06.385840  361927 start.go:305] selected driver: kvm2
	I1008 14:09:06.385848  361927 start.go:925] validating driver "kvm2" against <nil>
	I1008 14:09:06.386209  361927 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:09:06.386302  361927 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 14:09:06.401127  361927 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 14:09:06.401165  361927 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 14:09:06.416003  361927 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 14:09:06.416076  361927 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:09:06.416655  361927 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1008 14:09:06.416807  361927 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 14:09:06.416832  361927 cni.go:84] Creating CNI manager for ""
	I1008 14:09:06.416888  361927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 14:09:06.416897  361927 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 14:09:06.416949  361927 start.go:349] cluster config:
	{Name:download-only-810734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-810734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:09:06.417135  361927 iso.go:125] acquiring lock: {Name:mkaa45da6237a5a16f5f1d676ea2e57ba969b9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:09:06.419305  361927 out.go:99] Downloading VM boot image ...
	I1008 14:09:06.419377  361927 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21681-357044/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1008 14:09:16.147851  361927 out.go:99] Starting "download-only-810734" primary control-plane node in "download-only-810734" cluster
	I1008 14:09:16.147892  361927 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 14:09:16.243428  361927 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1008 14:09:16.243496  361927 cache.go:58] Caching tarball of preloaded images
	I1008 14:09:16.243786  361927 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 14:09:16.245624  361927 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1008 14:09:16.245649  361927 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1008 14:09:16.344632  361927 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1008 14:09:16.344778  361927 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-810734 host does not exist
	  To start a cluster, run: "minikube start -p download-only-810734"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
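Note: exit status 85 is the expected outcome here, not a flake: a --download-only profile never creates a host (see "The control-plane node download-only-810734 host does not exist" above), so the test treats a specific non-zero exit from `minikube logs` as a pass. A sketch of asserting an exact exit code with os/exec (binary path as in this report):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-810734")
        out, err := cmd.CombinedOutput()

        // A plain success or a different exit code would both be failures here.
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
            fmt.Println("got expected exit status 85")
        } else {
            fmt.Printf("unexpected result: err=%v\noutput:\n%s", err, out)
        }
    }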

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-810734
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (11.16s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-422477 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-422477 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (11.154756364s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.16s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1008 14:09:39.382606  361915 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1008 14:09:39.382654  361915 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-422477
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-422477: exit status 85 (65.966781ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-810734 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-810734 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │ 08 Oct 25 14:09 UTC │
	│ delete  │ -p download-only-810734                                                                                                                                                                             │ download-only-810734 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │ 08 Oct 25 14:09 UTC │
	│ start   │ -o=json --download-only -p download-only-422477 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-422477 │ jenkins │ v1.37.0 │ 08 Oct 25 14:09 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 14:09:28
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 14:09:28.271815  362177 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:09:28.272066  362177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:09:28.272075  362177 out.go:374] Setting ErrFile to fd 2...
	I1008 14:09:28.272079  362177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:09:28.272287  362177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:09:28.272809  362177 out.go:368] Setting JSON to true
	I1008 14:09:28.273830  362177 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3100,"bootTime":1759929468,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:09:28.273926  362177 start.go:141] virtualization: kvm guest
	I1008 14:09:28.275939  362177 out.go:99] [download-only-422477] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:09:28.276152  362177 notify.go:220] Checking for updates...
	I1008 14:09:28.277682  362177 out.go:171] MINIKUBE_LOCATION=21681
	I1008 14:09:28.279347  362177 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:09:28.280880  362177 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 14:09:28.282432  362177 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:09:28.284193  362177 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1008 14:09:28.286901  362177 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 14:09:28.287175  362177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:09:28.321486  362177 out.go:99] Using the kvm2 driver based on user configuration
	I1008 14:09:28.321528  362177 start.go:305] selected driver: kvm2
	I1008 14:09:28.321534  362177 start.go:925] validating driver "kvm2" against <nil>
	I1008 14:09:28.321864  362177 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:09:28.321972  362177 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 14:09:28.336720  362177 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 14:09:28.336765  362177 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21681-357044/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1008 14:09:28.351492  362177 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1008 14:09:28.351559  362177 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 14:09:28.352339  362177 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1008 14:09:28.352581  362177 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 14:09:28.352616  362177 cni.go:84] Creating CNI manager for ""
	I1008 14:09:28.352688  362177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1008 14:09:28.352701  362177 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 14:09:28.352765  362177 start.go:349] cluster config:
	{Name:download-only-422477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-422477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:09:28.352903  362177 iso.go:125] acquiring lock: {Name:mkaa45da6237a5a16f5f1d676ea2e57ba969b9e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 14:09:28.354600  362177 out.go:99] Starting "download-only-422477" primary control-plane node in "download-only-422477" cluster
	I1008 14:09:28.354620  362177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:09:28.769019  362177 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1008 14:09:28.769067  362177 cache.go:58] Caching tarball of preloaded images
	I1008 14:09:28.769286  362177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 14:09:28.771109  362177 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1008 14:09:28.771137  362177 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1008 14:09:28.871099  362177 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1008 14:09:28.871150  362177 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21681-357044/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-422477 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422477"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-422477
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.67s)
=== RUN   TestBinaryMirror
I1008 14:09:40.040807  361915 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-272062 --alsologtostderr --binary-mirror http://127.0.0.1:39803 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-272062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-272062
--- PASS: TestBinaryMirror (0.67s)
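Note: the download URLs throughout this report carry a checksum query parameter, either an inline digest (checksum=md5:<hex>) or a sidecar file (checksum=file:<url>.sha256), and the client verifies the artifact after fetching it. A minimal sketch of the sidecar variant, assuming only the standard library (verifySHA256 is a hypothetical helper):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verifySHA256 hashes a downloaded file and compares it against the hex
    // digest in a sidecar .sha256 file ("<digest>" or "<digest>  <name>").
    func verifySHA256(path, sumPath string) error {
        want, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        fields := strings.Fields(string(want))
        if len(fields) == 0 || fields[0] != got {
            return fmt.Errorf("checksum mismatch for %s: got %s", path, got)
        }
        return nil
    }

    func main() {
        fmt.Println(verifySHA256("kubectl", "kubectl.sha256"))
    }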

TestOffline (119.95s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-644334 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-644334 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m58.972243394s)
helpers_test.go:175: Cleaning up "offline-crio-644334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-644334
--- PASS: TestOffline (119.95s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-527125
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-527125: exit status 85 (55.458109ms)

-- stdout --
	* Profile "addons-527125" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527125"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-527125
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-527125: exit status 85 (56.991535ms)

-- stdout --
	* Profile "addons-527125" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527125"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (200.83s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-527125 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-527125 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m20.828165832s)
--- PASS: TestAddons/Setup (200.83s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-527125 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-527125 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (9.53s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-527125 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-527125 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bf129346-930b-4f40-8ca4-15fa8630b971] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bf129346-930b-4f40-8ca4-15fa8630b971] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00473226s
addons_test.go:694: (dbg) Run:  kubectl --context addons-527125 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-527125 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-527125 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.53s)

TestAddons/parallel/Registry (20.93s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 15.265155ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-lhrp9" [8f7fc2f9-1f90-4337-a178-fdb7f65c2522] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008548527s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-fmht2" [774b61c7-fefe-4387-a25a-6db2c0e46f1a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.012212688s
addons_test.go:392: (dbg) Run:  kubectl --context addons-527125 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-527125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-527125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.25013s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable registry --alsologtostderr -v=1: (1.453933061s)
--- PASS: TestAddons/parallel/Registry (20.93s)
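Note: the registry check above probes the service from inside the cluster by running a throwaway busybox pod with `wget --spider`. The same probe as a standalone Go helper that shells out to kubectl (probeClusterURL is hypothetical; -i replaces the test's -it since there is no TTY here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeClusterURL starts a one-shot busybox pod and HEAD-checks an
    // in-cluster URL, mirroring the `kubectl run --rm registry-test` call above.
    func probeClusterURL(kubeContext, url string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "run", "--rm", "registry-probe", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-i", "--",
            "sh", "-c", "wget --spider -S "+url)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("probe failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(probeClusterURL("addons-527125",
            "http://registry.kube-system.svc.cluster.local"))
    }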

TestAddons/parallel/RegistryCreds (0.96s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.110504ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-527125
addons_test.go:332: (dbg) Run:  kubectl --context addons-527125 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.96s)

TestAddons/parallel/InspektorGadget (6.53s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-fcgl2" [73a321d7-cbc8-4559-a441-943c51fd3a20] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004587742s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.53s)

TestAddons/parallel/MetricsServer (7.52s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 15.801417ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-j2pnk" [c46f59cd-bf89-4732-9756-c81ffea7ce87] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006412318s
addons_test.go:463: (dbg) Run:  kubectl --context addons-527125 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable metrics-server --alsologtostderr -v=1: (1.411914485s)
--- PASS: TestAddons/parallel/MetricsServer (7.52s)

TestAddons/parallel/CSI (49.39s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1008 14:13:27.615191  361915 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1008 14:13:27.623540  361915 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1008 14:13:27.623573  361915 kapi.go:107] duration metric: took 8.395234ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.409613ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-527125 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-527125 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ba77a926-c6a5-47ff-9ca7-81769851a640] Pending
helpers_test.go:352: "task-pv-pod" [ba77a926-c6a5-47ff-9ca7-81769851a640] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ba77a926-c6a5-47ff-9ca7-81769851a640] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.042827196s
addons_test.go:572: (dbg) Run:  kubectl --context addons-527125 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-527125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-527125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-527125 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-527125 delete pod task-pv-pod: (1.30238105s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-527125 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-527125 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-527125 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [abfe2979-7345-4170-b962-13e4fd16fb25] Pending
helpers_test.go:352: "task-pv-pod-restore" [abfe2979-7345-4170-b962-13e4fd16fb25] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [abfe2979-7345-4170-b962-13e4fd16fb25] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004295454s
addons_test.go:614: (dbg) Run:  kubectl --context addons-527125 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-527125 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-527125 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable volumesnapshots --alsologtostderr -v=1: (1.031065533s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.013725683s)
--- PASS: TestAddons/parallel/CSI (49.39s)
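Note: the long run of helpers_test.go:402 lines above is a poll loop: the helper re-reads the claim's .status.phase until it leaves Pending. A sketch of that loop (waitForPVCPhase is hypothetical; the jsonpath query is the one used above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls `kubectl get pvc` until the claim reaches the
    // wanted phase (e.g. "Bound") or the timeout expires.
    func waitForPVCPhase(kubeContext, ns, name, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", name, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, want)
    }

    func main() {
        fmt.Println(waitForPVCPhase("addons-527125", "default", "hpvc", "Bound", 6*time.Minute))
    }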

TestAddons/parallel/Headlamp (20.71s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-527125 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-527125 --alsologtostderr -v=1: (1.005308561s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-95ffh" [a82e6560-8ead-4ea1-abb4-5020e9265ad7] Pending
helpers_test.go:352: "headlamp-6945c6f4d-95ffh" [a82e6560-8ead-4ea1-abb4-5020e9265ad7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-95ffh" [a82e6560-8ead-4ea1-abb4-5020e9265ad7] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004576978s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable headlamp --alsologtostderr -v=1: (6.700616152s)
--- PASS: TestAddons/parallel/Headlamp (20.71s)

TestAddons/parallel/CloudSpanner (7.02s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-qg4td" [e7d01947-b740-412a-ad46-ecac4aa34c69] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004773483s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable cloud-spanner --alsologtostderr -v=1: (1.000548299s)
--- PASS: TestAddons/parallel/CloudSpanner (7.02s)

TestAddons/parallel/LocalPath (56.2s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-527125 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-527125 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [05228246-3f38-4eaa-85f0-8afa88a5a757] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [05228246-3f38-4eaa-85f0-8afa88a5a757] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [05228246-3f38-4eaa-85f0-8afa88a5a757] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004910167s
addons_test.go:967: (dbg) Run:  kubectl --context addons-527125 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 ssh "cat /opt/local-path-provisioner/pvc-58128b55-1842-4e77-9262-1af4a121e42e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-527125 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-527125 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.238440522s)
--- PASS: TestAddons/parallel/LocalPath (56.20s)

TestAddons/parallel/NvidiaDevicePlugin (6.93s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-2tj86" [1b591317-30f5-433d-a13f-035b3c173ac6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007126197s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.93s)

TestAddons/parallel/Yakd (12.29s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xrv5n" [8ba42c54-a032-49f3-82a2-81541cf30214] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.007705769s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-527125 addons disable yakd --alsologtostderr -v=1: (6.279354343s)
--- PASS: TestAddons/parallel/Yakd (12.29s)

TestAddons/StoppedEnableDisable (83.89s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-527125
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-527125: (1m23.590719159s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-527125
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-527125
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-527125
--- PASS: TestAddons/StoppedEnableDisable (83.89s)

TestCertOptions (82.75s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-181493 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 15:07:45.395827  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-181493 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.035950087s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-181493 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-181493 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-181493 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-181493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-181493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-181493: (1.198300093s)
--- PASS: TestCertOptions (82.75s)
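Note: instead of grepping `openssl x509 -text` output, the SAN check this test performs can be expressed directly with crypto/x509; VerifyHostname accepts both DNS names and IP strings. A sketch under the same flags used above, run against a copy of the apiserver.crt fetched from the node (checkCertSANs is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // checkCertSANs parses a PEM certificate and verifies it covers the extra
    // names/IPs passed via --apiserver-names/--apiserver-ips.
    func checkCertSANs(pemPath string, sans ...string) error {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        for _, san := range sans {
            if err := cert.VerifyHostname(san); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        fmt.Println(checkCertSANs("apiserver.crt",
            "localhost", "www.google.com", "127.0.0.1", "192.168.15.15"))
    }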

TestCertExpiration (409.28s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-189103 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-189103 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.790142851s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-189103 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-189103 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m45.517556769s)
helpers_test.go:175: Cleaning up "cert-expiration-189103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-189103
--- PASS: TestCertExpiration (409.28s)
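Note: the flow above starts a cluster whose certs expire in 3 minutes, waits them out, then restarts with --cert-expiration=8760h, which forces regeneration. The expiry condition itself is just a NotAfter comparison; a sketch (expiresWithin is a hypothetical helper):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert's NotAfter falls inside the given
    // window; certs minted with --cert-expiration=3m trip this almost at once.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < window, nil
    }

    func main() {
        fmt.Println(expiresWithin("apiserver.crt", 24*time.Hour))
    }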

TestForceSystemdFlag (60.42s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-732844 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-732844 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.255136292s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-732844 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-732844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-732844
--- PASS: TestForceSystemdFlag (60.42s)
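--force-systemd switches the container runtime's cgroup manager to systemd, and the test verifies it by reading the CRI-O drop-in shown above. A sketch of that check, assuming the drop-in has been fetched to a hypothetical local file 02-crio.conf (cgroup_manager is the relevant CRI-O TOML key):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Hypothetical local copy of the drop-in fetched via `minikube ssh`.
		f, err := os.Open("02-crio.conf")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			// CRI-O selects its cgroup driver via this TOML key.
			if strings.HasPrefix(line, "cgroup_manager") {
				fmt.Println(line)
				if strings.Contains(line, `"systemd"`) {
					fmt.Println("force-systemd took effect")
				}
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
	}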

TestForceSystemdEnv (41.65s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-737478 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-737478 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.641541801s)
helpers_test.go:175: Cleaning up "force-systemd-env-737478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-737478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-737478: (1.012129129s)
--- PASS: TestForceSystemdEnv (41.65s)

TestKVMDriverInstallOrUpdate (1.2s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1008 15:07:06.434805  361915 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 15:07:06.435040  361915 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3350183951/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1008 15:07:06.467086  361915 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3350183951/001/docker-machine-driver-kvm2 version is 1.1.1
W1008 15:07:06.467146  361915 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1008 15:07:06.467419  361915 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1008 15:07:06.467496  361915 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3350183951/001/docker-machine-driver-kvm2
I1008 15:07:07.476182  361915 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3350183951/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1008 15:07:07.494388  361915 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3350183951/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.20s)
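The install log above shows the decision this test exercises: the driver on PATH reports version 1.1.1, minikube wants 1.37.0, so it downloads the release binary (verified against the .sha256 checksum URL) and revalidates. A sketch of the version comparison driving that decision; the helper is illustrative, not minikube's actual parser:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// olderThan reports whether version a precedes b, comparing
	// dot-separated numeric fields (enough for "1.1.1" vs "1.37.0").
	func olderThan(a, b string) bool {
		as, bs := strings.Split(a, "."), strings.Split(b, ".")
		for i := 0; i < len(as) && i < len(bs); i++ {
			ai, _ := strconv.Atoi(as[i])
			bi, _ := strconv.Atoi(bs[i])
			if ai != bi {
				return ai < bi
			}
		}
		return len(as) < len(bs)
	}

	func main() {
		have, want := "1.1.1", "1.37.0" // versions from the log above
		if olderThan(have, want) {
			fmt.Printf("driver is %s, want %s: downloading update\n", have, want)
		} else {
			fmt.Println("driver is up to date")
		}
	}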

TestErrorSpam/setup (37.23s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-166072 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-166072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:18:02.318839  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.325430  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.336877  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.358363  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.399883  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.481465  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.643101  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:02.964893  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:03.606987  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:04.888526  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:07.451529  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:12.573010  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:18:22.814750  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-166072 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-166072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.234377861s)
--- PASS: TestErrorSpam/setup (37.23s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.84s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.72s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (2.06s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 unpause
--- PASS: TestErrorSpam/unpause (2.06s)

TestErrorSpam/stop (4.28s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 stop: (2.11463337s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 stop: (1.087583887s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-166072 --log_dir /tmp/nospam-166072 stop: (1.073369029s)
--- PASS: TestErrorSpam/stop (4.28s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21681-357044/.minikube/files/etc/test/nested/copy/361915/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.44s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882741 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:18:43.296918  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:19:24.259504  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-882741 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.440264849s)
--- PASS: TestFunctional/serial/StartWithProxy (79.44s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.33s)
=== RUN   TestFunctional/serial/SoftStart
I1008 14:19:55.549340  361915 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882741 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-882741 --alsologtostderr -v=8: (35.326605715s)
functional_test.go:678: soft start took 35.327429962s for "functional-882741" cluster.
I1008 14:20:30.876420  361915 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (35.33s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-882741 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 cache add registry.k8s.io/pause:3.1: (1.135909232s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 cache add registry.k8s.io/pause:3.3: (1.280236481s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 cache add registry.k8s.io/pause:latest: (1.169401905s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

TestFunctional/serial/CacheCmd/cache/add_local (2.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-882741 /tmp/TestFunctionalserialCacheCmdcacheadd_local1326483802/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cache add minikube-local-cache-test:functional-882741
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 cache add minikube-local-cache-test:functional-882741: (1.81028173s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cache delete minikube-local-cache-test:functional-882741
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-882741
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.262691ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 cache reload: (1.034182739s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
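The reload sequence above is: remove the image inside the node, confirm `crictl inspecti` now fails (exit status 1), run `cache reload`, and confirm the image is back. A sketch replaying that sequence by shelling out to the same binary path the test uses; treat the binary path and profile name as assumptions about your checkout:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary built in ./out, as the test does.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-882741" // profile name from the log above
		run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		// After rmi, inspecti should fail with a non-zero exit status.
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		// cache reload pushes the locally cached image back into the node.
		run("-p", p, "cache", "reload")
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}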

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 kubectl -- --context functional-882741 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-882741 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (49.42s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882741 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1008 14:20:46.183520  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-882741 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.419063449s)
functional_test.go:776: restart took 49.419219473s for "functional-882741" cluster.
I1008 14:21:28.589671  361915 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (49.42s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-882741 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
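The health check above pulls the control-plane pods as JSON and asserts each is in phase Running with condition Ready. A sketch decoding just the fields it needs from the same kubectl invocation; the struct mirrors only the relevant slice of the Pod schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// podList holds the minimal slice of the Pod schema used by the check.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-882741",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}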

TestFunctional/serial/LogsCmd (1.5s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 logs: (1.501655484s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

TestFunctional/serial/LogsFileCmd (1.51s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 logs --file /tmp/TestFunctionalserialLogsFileCmd2219441801/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 logs --file /tmp/TestFunctionalserialLogsFileCmd2219441801/001/logs.txt: (1.509347932s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.29s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-882741 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-882741
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-882741: exit status 115 (292.146339ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.97:32342 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-882741 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

TestFunctional/parallel/ConfigCmd (0.36s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 config get cpus: exit status 14 (52.799945ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 config get cpus: exit status 14 (55.863633ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (32.97s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882741 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882741 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 370728: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.97s)

TestFunctional/parallel/DryRun (0.34s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882741 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882741 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (185.270972ms)

-- stdout --
	* [functional-882741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1008 14:21:47.910656  370465 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:21:47.911069  370465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:21:47.911081  370465 out.go:374] Setting ErrFile to fd 2...
	I1008 14:21:47.911088  370465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:21:47.911417  370465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:21:47.911990  370465 out.go:368] Setting JSON to false
	I1008 14:21:47.913076  370465 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3840,"bootTime":1759929468,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:21:47.913198  370465 start.go:141] virtualization: kvm guest
	I1008 14:21:47.918915  370465 out.go:179] * [functional-882741] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 14:21:47.921131  370465 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:21:47.921165  370465 notify.go:220] Checking for updates...
	I1008 14:21:47.924474  370465 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:21:47.926010  370465 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 14:21:47.927524  370465 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:21:47.928885  370465 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:21:47.930436  370465 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:21:47.932152  370465 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:21:47.932668  370465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:21:47.932728  370465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:21:47.953014  370465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41551
	I1008 14:21:47.953579  370465 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:21:47.954139  370465 main.go:141] libmachine: Using API Version  1
	I1008 14:21:47.954159  370465 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:21:47.954641  370465 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:21:47.954911  370465 main.go:141] libmachine: (functional-882741) Calling .DriverName
	I1008 14:21:47.955206  370465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:21:47.955666  370465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:21:47.955724  370465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:21:47.974393  370465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39227
	I1008 14:21:47.975030  370465 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:21:47.975824  370465 main.go:141] libmachine: Using API Version  1
	I1008 14:21:47.975865  370465 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:21:47.976314  370465 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:21:47.976625  370465 main.go:141] libmachine: (functional-882741) Calling .DriverName
	I1008 14:21:48.023514  370465 out.go:179] * Using the kvm2 driver based on existing profile
	I1008 14:21:48.025079  370465 start.go:305] selected driver: kvm2
	I1008 14:21:48.025103  370465 start.go:925] validating driver "kvm2" against &{Name:functional-882741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-882741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:21:48.025248  370465 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:21:48.027661  370465 out.go:203] 
	W1008 14:21:48.028996  370465 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 14:21:48.030506  370465 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882741 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.34s)
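The dry run fails by design: 250MB is below the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message. A sketch of that guard; the constant comes from the error text above, and the helper is illustrative rather than minikube's actual validation code:

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // floor quoted in the error message above

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}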

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882741 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882741 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (170.574167ms)

-- stdout --
	* [functional-882741] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1008 14:21:48.254776  370588 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:21:48.255077  370588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:21:48.255093  370588 out.go:374] Setting ErrFile to fd 2...
	I1008 14:21:48.255100  370588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:21:48.255590  370588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:21:48.256222  370588 out.go:368] Setting JSON to false
	I1008 14:21:48.257718  370588 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3840,"bootTime":1759929468,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 14:21:48.257834  370588 start.go:141] virtualization: kvm guest
	I1008 14:21:48.259811  370588 out.go:179] * [functional-882741] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1008 14:21:48.261369  370588 notify.go:220] Checking for updates...
	I1008 14:21:48.261391  370588 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 14:21:48.262806  370588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 14:21:48.264097  370588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 14:21:48.265495  370588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 14:21:48.266864  370588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 14:21:48.268226  370588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 14:21:48.270319  370588 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:21:48.271025  370588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:21:48.271092  370588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:21:48.292076  370588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I1008 14:21:48.292837  370588 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:21:48.293558  370588 main.go:141] libmachine: Using API Version  1
	I1008 14:21:48.293618  370588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:21:48.294063  370588 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:21:48.294274  370588 main.go:141] libmachine: (functional-882741) Calling .DriverName
	I1008 14:21:48.294576  370588 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 14:21:48.295054  370588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:21:48.295106  370588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:21:48.311266  370588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I1008 14:21:48.311931  370588 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:21:48.312483  370588 main.go:141] libmachine: Using API Version  1
	I1008 14:21:48.312511  370588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:21:48.312995  370588 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:21:48.313196  370588 main.go:141] libmachine: (functional-882741) Calling .DriverName
	I1008 14:21:48.352365  370588 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1008 14:21:48.353907  370588 start.go:305] selected driver: kvm2
	I1008 14:21:48.353930  370588 start.go:925] validating driver "kvm2" against &{Name:functional-882741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-882741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 14:21:48.354082  370588 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 14:21:48.356665  370588 out.go:203] 
	W1008 14:21:48.358127  370588 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 14:21:48.359479  370588 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.07s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
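The -f flag above takes a Go text/template over minikube's status struct. A sketch of how that format string expands, assuming a simplified stand-in Status struct; the field names come from the template in the log, which also spells the label "kublet" (preserved here verbatim):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a simplified stand-in; minikube's real struct has more fields.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// The same template the test passes via -f.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}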

TestFunctional/parallel/ServiceCmdConnect (9.62s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-882741 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-882741 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-z58t2" [58340897-e60e-4dad-a046-c8b6c88403c2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-z58t2" [58340897-e60e-4dad-a046-c8b6c88403c2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.010599705s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.97:31167
functional_test.go:1680: http://192.168.39.97:31167: success! body:
Request served by hello-node-connect-7d85dfc575-z58t2

HTTP/1.1 GET /

Host: 192.168.39.97:31167
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.62s)
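The test resolves the NodePort URL via `service ... --url` and then issues a plain GET; kicbase/echo-server replies with a description of the request it received, as shown above. A sketch of that probe, with the endpoint taken from the log:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "http://192.168.39.97:31167" // NodePort endpoint from the log above
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// echo-server reflects the request, so the body names the serving pod.
		fmt.Printf("status %d\n%s", resp.StatusCode, body)
	}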

TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (48.7s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4ac45313-f86b-41e8-b83b-0dca2da4a022] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004130523s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-882741 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-882741 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-882741 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-882741 apply -f testdata/storage-provisioner/pod.yaml
I1008 14:21:43.401898  361915 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b3fcd047-d30e-4614-90c0-94975502e45e] Pending
helpers_test.go:352: "sp-pod" [b3fcd047-d30e-4614-90c0-94975502e45e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b3fcd047-d30e-4614-90c0-94975502e45e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.081549427s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-882741 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-882741 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-882741 delete -f testdata/storage-provisioner/pod.yaml: (2.732657343s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-882741 apply -f testdata/storage-provisioner/pod.yaml
I1008 14:22:01.540908  361915 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bbd0718f-00f0-4308-96d5-0517b60ace28] Pending
helpers_test.go:352: "sp-pod" [bbd0718f-00f0-4308-96d5-0517b60ace28] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [bbd0718f-00f0-4308-96d5-0517b60ace28] Running
2025/10/08 14:22:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004887246s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-882741 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.70s)

TestFunctional/parallel/SSHCmd (0.45s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh -n functional-882741 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cp functional-882741:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1906815925/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh -n functional-882741 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh -n functional-882741 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

TestFunctional/parallel/MySQL (26.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-882741 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-plj5t" [cab3af6e-2f29-41e5-b363-88a0aa58c6b7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-plj5t" [cab3af6e-2f29-41e5-b363-88a0aa58c6b7] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.01945849s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;": exit status 1 (403.488126ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1008 14:22:10.037116  361915 retry.go:31] will retry after 711.626372ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;": exit status 1 (520.83183ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1008 14:22:11.270496  361915 retry.go:31] will retry after 980.971744ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;": exit status 1 (249.461169ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1008 14:22:12.502248  361915 retry.go:31] will retry after 1.703099836s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-882741 exec mysql-5bb876957f-plj5t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.05s)
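
Note: the failed exec attempts above are expected churn, not flakes. ERROR 1045 and ERROR 2002 are what mysqld emits while it is still initializing inside the container, and the harness simply retries with growing backoff (~0.7s, ~1.0s, ~1.7s) until "show databases;" succeeds. A hedged sketch of an equivalent wait loop (the deployment name mysql is inferred from the pod name mysql-5bb876957f-plj5t):

	# poll until mysqld accepts the query; the 1s interval is illustrative
	until kubectl --context functional-882741 exec deploy/mysql -- \
	    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	  sleep 1
	done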

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/361915/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /etc/test/nested/copy/361915/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
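
Note: this exercises minikube's file sync feature: files placed under $MINIKUBE_HOME/.minikube/files/ on the host are copied into the guest at the same absolute path when the machine starts. A minimal sketch, assuming the default MINIKUBE_HOME (361915 is just this run's test-process PID, visible in the log prefixes):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/361915
	echo "Test file for checking file sync process" \
	    > ~/.minikube/files/etc/test/nested/copy/361915/hosts
	minikube start -p functional-882741
	minikube -p functional-882741 ssh -- cat /etc/test/nested/copy/361915/hosts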

TestFunctional/parallel/CertSync (1.24s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/361915.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /etc/ssl/certs/361915.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/361915.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /usr/share/ca-certificates/361915.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3619152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /etc/ssl/certs/3619152.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3619152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /usr/share/ca-certificates/3619152.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)
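
Note: the .pem paths are the test certificates synced into the VM, while /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 are OpenSSL subject-hash names, the scheme the system trust store uses to look certificates up. Assuming the standard c_rehash convention, the link name can be recomputed from the certificate itself:

	# expected to print 51391683 for the cert behind /etc/ssl/certs/361915.pem
	openssl x509 -noout -subject_hash -in 361915.pem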

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-882741 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
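
Note: the go-template above prints only the label keys of the first node. A simpler way to eyeball the same data (not what the harness runs):

	kubectl --context functional-882741 get nodes --show-labels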

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh "sudo systemctl is-active docker": exit status 1 (234.70126ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh "sudo systemctl is-active containerd": exit status 1 (233.197995ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
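
Note: the non-zero exits are the expected outcome here. systemctl is-active exits 0 only for an active unit and 3 for an inactive one, which is exactly the "ssh: Process exited with status 3" seen above; with crio as the configured runtime, the test passes precisely when docker and containerd both report "inactive". Manual spot check:

	minikube -p functional-882741 ssh -- sudo systemctl is-active docker
	# expected: "inactive" on stdout, overall exit status 3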

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.61s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882741 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-882741
localhost/kicbase/echo-server:functional-882741
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882741 image ls --format short --alsologtostderr:
I1008 14:22:14.149650  371392 out.go:360] Setting OutFile to fd 1 ...
I1008 14:22:14.149982  371392 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.149995  371392 out.go:374] Setting ErrFile to fd 2...
I1008 14:22:14.150001  371392 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.150387  371392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
I1008 14:22:14.151264  371392 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.151436  371392 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.151878  371392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.151963  371392 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.167672  371392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46325
I1008 14:22:14.168204  371392 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.168830  371392 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.168859  371392 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.169274  371392 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.169526  371392 main.go:141] libmachine: (functional-882741) Calling .GetState
I1008 14:22:14.171906  371392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.171955  371392 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.186529  371392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
I1008 14:22:14.187015  371392 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.187499  371392 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.187522  371392 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.187903  371392 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.188127  371392 main.go:141] libmachine: (functional-882741) Calling .DriverName
I1008 14:22:14.188328  371392 ssh_runner.go:195] Run: systemctl --version
I1008 14:22:14.188366  371392 main.go:141] libmachine: (functional-882741) Calling .GetSSHHostname
I1008 14:22:14.192222  371392 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.192759  371392 main.go:141] libmachine: (functional-882741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:e9", ip: ""} in network mk-functional-882741: {Iface:virbr1 ExpiryTime:2025-10-08 15:18:51 +0000 UTC Type:0 Mac:52:54:00:1e:2e:e9 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-882741 Clientid:01:52:54:00:1e:2e:e9}
I1008 14:22:14.192792  371392 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined IP address 192.168.39.97 and MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.193028  371392 main.go:141] libmachine: (functional-882741) Calling .GetSSHPort
I1008 14:22:14.193210  371392 main.go:141] libmachine: (functional-882741) Calling .GetSSHKeyPath
I1008 14:22:14.193408  371392 main.go:141] libmachine: (functional-882741) Calling .GetSSHUsername
I1008 14:22:14.193609  371392 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/functional-882741/id_rsa Username:docker}
I1008 14:22:14.278149  371392 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 14:22:14.385501  371392 main.go:141] libmachine: Making call to close driver server
I1008 14:22:14.385532  371392 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:14.385904  371392 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:14.385928  371392 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:14.385940  371392 main.go:141] libmachine: Making call to close driver server
I1008 14:22:14.385953  371392 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:14.386164  371392 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:14.386187  371392 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:14.386223  371392 main.go:141] libmachine: (functional-882741) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
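
Note: per the stderr trace, image ls against the crio runtime boils down to one "sudo crictl images --output json" inside the VM, which the CLI then reformats. The same raw data can be pulled directly; a sketch assuming jq is installed on the host (the filter is illustrative):

	minikube -p functional-882741 ssh -- sudo crictl images --output json \
	  | jq -r '.images[].repoTags[]?'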

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882741 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-882741  │ 026437b442c4c │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-882741  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882741 image ls --format table --alsologtostderr:
I1008 14:22:15.007478  371541 out.go:360] Setting OutFile to fd 1 ...
I1008 14:22:15.007802  371541 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:15.007816  371541 out.go:374] Setting ErrFile to fd 2...
I1008 14:22:15.007820  371541 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:15.008046  371541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
I1008 14:22:15.008667  371541 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:15.008771  371541 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:15.009189  371541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:15.009259  371541 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:15.024605  371541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
I1008 14:22:15.025228  371541 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:15.025860  371541 main.go:141] libmachine: Using API Version  1
I1008 14:22:15.025917  371541 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:15.026283  371541 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:15.026551  371541 main.go:141] libmachine: (functional-882741) Calling .GetState
I1008 14:22:15.029053  371541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:15.029106  371541 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:15.043891  371541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
I1008 14:22:15.044424  371541 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:15.044975  371541 main.go:141] libmachine: Using API Version  1
I1008 14:22:15.045005  371541 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:15.045444  371541 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:15.045697  371541 main.go:141] libmachine: (functional-882741) Calling .DriverName
I1008 14:22:15.045948  371541 ssh_runner.go:195] Run: systemctl --version
I1008 14:22:15.045980  371541 main.go:141] libmachine: (functional-882741) Calling .GetSSHHostname
I1008 14:22:15.049934  371541 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:15.050437  371541 main.go:141] libmachine: (functional-882741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:e9", ip: ""} in network mk-functional-882741: {Iface:virbr1 ExpiryTime:2025-10-08 15:18:51 +0000 UTC Type:0 Mac:52:54:00:1e:2e:e9 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-882741 Clientid:01:52:54:00:1e:2e:e9}
I1008 14:22:15.050475  371541 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined IP address 192.168.39.97 and MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:15.050666  371541 main.go:141] libmachine: (functional-882741) Calling .GetSSHPort
I1008 14:22:15.050882  371541 main.go:141] libmachine: (functional-882741) Calling .GetSSHKeyPath
I1008 14:22:15.051082  371541 main.go:141] libmachine: (functional-882741) Calling .GetSSHUsername
I1008 14:22:15.051288  371541 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/functional-882741/id_rsa Username:docker}
I1008 14:22:15.134206  371541 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 14:22:15.174944  371541 main.go:141] libmachine: Making call to close driver server
I1008 14:22:15.174961  371541 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:15.175293  371541 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:15.175312  371541 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:15.175322  371541 main.go:141] libmachine: Making call to close driver server
I1008 14:22:15.175330  371541 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:15.175563  371541 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:15.175581  371541 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:15.175659  371541 main.go:141] libmachine: (functional-882741) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882741 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-882741"],"size":"4945246"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"026437b442c4c06aead9de44d4b949a3f5f74d55235686479cd6888177b2a061","repoDigests":["localhost/minikube-local-cache-test@sha256:7c62aa38890797172146061c83881925cfc72a7839c8354a5d03734e38ba1dab"],"repoTags":["localhost/minikube-local-cache-test:functional-882741"],"size":"3330"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882741 image ls --format json --alsologtostderr:
I1008 14:22:14.714034  371485 out.go:360] Setting OutFile to fd 1 ...
I1008 14:22:14.714276  371485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.714281  371485 out.go:374] Setting ErrFile to fd 2...
I1008 14:22:14.714285  371485 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.714494  371485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
I1008 14:22:14.715060  371485 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.715145  371485 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.715552  371485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.715657  371485 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.730426  371485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33095
I1008 14:22:14.730916  371485 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.731498  371485 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.731537  371485 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.731922  371485 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.732132  371485 main.go:141] libmachine: (functional-882741) Calling .GetState
I1008 14:22:14.734384  371485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.734438  371485 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.749315  371485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
I1008 14:22:14.749788  371485 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.750324  371485 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.750368  371485 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.750712  371485 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.750922  371485 main.go:141] libmachine: (functional-882741) Calling .DriverName
I1008 14:22:14.751206  371485 ssh_runner.go:195] Run: systemctl --version
I1008 14:22:14.751236  371485 main.go:141] libmachine: (functional-882741) Calling .GetSSHHostname
I1008 14:22:14.755042  371485 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.755500  371485 main.go:141] libmachine: (functional-882741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:e9", ip: ""} in network mk-functional-882741: {Iface:virbr1 ExpiryTime:2025-10-08 15:18:51 +0000 UTC Type:0 Mac:52:54:00:1e:2e:e9 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-882741 Clientid:01:52:54:00:1e:2e:e9}
I1008 14:22:14.755544  371485 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined IP address 192.168.39.97 and MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.755684  371485 main.go:141] libmachine: (functional-882741) Calling .GetSSHPort
I1008 14:22:14.755845  371485 main.go:141] libmachine: (functional-882741) Calling .GetSSHKeyPath
I1008 14:22:14.756001  371485 main.go:141] libmachine: (functional-882741) Calling .GetSSHUsername
I1008 14:22:14.756146  371485 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/functional-882741/id_rsa Username:docker}
I1008 14:22:14.858057  371485 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 14:22:14.947997  371485 main.go:141] libmachine: Making call to close driver server
I1008 14:22:14.948020  371485 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:14.948335  371485 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:14.948372  371485 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:14.948386  371485 main.go:141] libmachine: Making call to close driver server
I1008 14:22:14.948393  371485 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:14.948421  371485 main.go:141] libmachine: (functional-882741) DBG | Closing plugin on server side
I1008 14:22:14.948726  371485 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:14.948735  371485 main.go:141] libmachine: (functional-882741) DBG | Closing plugin on server side
I1008 14:22:14.948748  371485 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882741 image ls --format yaml --alsologtostderr:
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-882741
size: "4945246"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 026437b442c4c06aead9de44d4b949a3f5f74d55235686479cd6888177b2a061
repoDigests:
- localhost/minikube-local-cache-test@sha256:7c62aa38890797172146061c83881925cfc72a7839c8354a5d03734e38ba1dab
repoTags:
- localhost/minikube-local-cache-test:functional-882741
size: "3330"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882741 image ls --format yaml --alsologtostderr:
I1008 14:22:14.446154  371426 out.go:360] Setting OutFile to fd 1 ...
I1008 14:22:14.446516  371426 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.446531  371426 out.go:374] Setting ErrFile to fd 2...
I1008 14:22:14.446537  371426 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.446964  371426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
I1008 14:22:14.447865  371426 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.448017  371426 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.448611  371426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.448705  371426 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.465326  371426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
I1008 14:22:14.465980  371426 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.466745  371426 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.466783  371426 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.467378  371426 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.467731  371426 main.go:141] libmachine: (functional-882741) Calling .GetState
I1008 14:22:14.470810  371426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.470877  371426 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.485991  371426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
I1008 14:22:14.486496  371426 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.487087  371426 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.487125  371426 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.487529  371426 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.487773  371426 main.go:141] libmachine: (functional-882741) Calling .DriverName
I1008 14:22:14.488015  371426 ssh_runner.go:195] Run: systemctl --version
I1008 14:22:14.488041  371426 main.go:141] libmachine: (functional-882741) Calling .GetSSHHostname
I1008 14:22:14.492107  371426 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.493052  371426 main.go:141] libmachine: (functional-882741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:e9", ip: ""} in network mk-functional-882741: {Iface:virbr1 ExpiryTime:2025-10-08 15:18:51 +0000 UTC Type:0 Mac:52:54:00:1e:2e:e9 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-882741 Clientid:01:52:54:00:1e:2e:e9}
I1008 14:22:14.493086  371426 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined IP address 192.168.39.97 and MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.493274  371426 main.go:141] libmachine: (functional-882741) Calling .GetSSHPort
I1008 14:22:14.493552  371426 main.go:141] libmachine: (functional-882741) Calling .GetSSHKeyPath
I1008 14:22:14.493772  371426 main.go:141] libmachine: (functional-882741) Calling .GetSSHUsername
I1008 14:22:14.493969  371426 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/functional-882741/id_rsa Username:docker}
I1008 14:22:14.597538  371426 ssh_runner.go:195] Run: sudo crictl images --output json
I1008 14:22:14.656841  371426 main.go:141] libmachine: Making call to close driver server
I1008 14:22:14.656858  371426 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:14.657156  371426 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:14.657172  371426 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:14.657182  371426 main.go:141] libmachine: Making call to close driver server
I1008 14:22:14.657189  371426 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:14.657430  371426 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:14.657446  371426 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh pgrep buildkitd: exit status 1 (243.969527ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image build -t localhost/my-image:functional-882741 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 image build -t localhost/my-image:functional-882741 testdata/build --alsologtostderr: (3.859800611s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882741 image build -t localhost/my-image:functional-882741 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f618e3f8c84
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-882741
--> 660d3938a0c
Successfully tagged localhost/my-image:functional-882741
660d3938a0c46dce41d059c8c1a69f0386c614c0b1ed9f6742a005d05d44d188
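
Note: the STEP lines above pin down the build definition exactly: testdata/build must contain a content.txt plus a Dockerfile (Containerfile) equivalent to the three lines below, which minikube feeds to podman inside the VM (see the "sudo podman build ... --cgroup-manager=cgroupfs" call in the stderr trace that follows).

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
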
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882741 image build -t localhost/my-image:functional-882741 testdata/build --alsologtostderr:
I1008 14:22:14.709586  371479 out.go:360] Setting OutFile to fd 1 ...
I1008 14:22:14.709877  371479 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.709888  371479 out.go:374] Setting ErrFile to fd 2...
I1008 14:22:14.709896  371479 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 14:22:14.710102  371479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
I1008 14:22:14.710723  371479 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.711510  371479 config.go:182] Loaded profile config "functional-882741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 14:22:14.711939  371479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.711982  371479 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.726428  371479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37893
I1008 14:22:14.727041  371479 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.727643  371479 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.727672  371479 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.728076  371479 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.728320  371479 main.go:141] libmachine: (functional-882741) Calling .GetState
I1008 14:22:14.730977  371479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1008 14:22:14.731031  371479 main.go:141] libmachine: Launching plugin server for driver kvm2
I1008 14:22:14.746049  371479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
I1008 14:22:14.746613  371479 main.go:141] libmachine: () Calling .GetVersion
I1008 14:22:14.747232  371479 main.go:141] libmachine: Using API Version  1
I1008 14:22:14.747263  371479 main.go:141] libmachine: () Calling .SetConfigRaw
I1008 14:22:14.747716  371479 main.go:141] libmachine: () Calling .GetMachineName
I1008 14:22:14.747943  371479 main.go:141] libmachine: (functional-882741) Calling .DriverName
I1008 14:22:14.748195  371479 ssh_runner.go:195] Run: systemctl --version
I1008 14:22:14.748224  371479 main.go:141] libmachine: (functional-882741) Calling .GetSSHHostname
I1008 14:22:14.752318  371479 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.752851  371479 main.go:141] libmachine: (functional-882741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:2e:e9", ip: ""} in network mk-functional-882741: {Iface:virbr1 ExpiryTime:2025-10-08 15:18:51 +0000 UTC Type:0 Mac:52:54:00:1e:2e:e9 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-882741 Clientid:01:52:54:00:1e:2e:e9}
I1008 14:22:14.752884  371479 main.go:141] libmachine: (functional-882741) DBG | domain functional-882741 has defined IP address 192.168.39.97 and MAC address 52:54:00:1e:2e:e9 in network mk-functional-882741
I1008 14:22:14.753063  371479 main.go:141] libmachine: (functional-882741) Calling .GetSSHPort
I1008 14:22:14.753257  371479 main.go:141] libmachine: (functional-882741) Calling .GetSSHKeyPath
I1008 14:22:14.753433  371479 main.go:141] libmachine: (functional-882741) Calling .GetSSHUsername
I1008 14:22:14.753636  371479 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/functional-882741/id_rsa Username:docker}
I1008 14:22:14.866844  371479 build_images.go:161] Building image from path: /tmp/build.275803227.tar
I1008 14:22:14.866943  371479 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1008 14:22:14.885315  371479 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.275803227.tar
I1008 14:22:14.891655  371479 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.275803227.tar: stat -c "%s %y" /var/lib/minikube/build/build.275803227.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.275803227.tar': No such file or directory
I1008 14:22:14.891702  371479 ssh_runner.go:362] scp /tmp/build.275803227.tar --> /var/lib/minikube/build/build.275803227.tar (3072 bytes)
I1008 14:22:14.929966  371479 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.275803227
I1008 14:22:14.952253  371479 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.275803227 -xf /var/lib/minikube/build/build.275803227.tar
I1008 14:22:14.967854  371479 crio.go:315] Building image: /var/lib/minikube/build/build.275803227
I1008 14:22:14.967929  371479 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-882741 /var/lib/minikube/build/build.275803227 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1008 14:22:18.460193  371479 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-882741 /var/lib/minikube/build/build.275803227 --cgroup-manager=cgroupfs: (3.492227788s)
I1008 14:22:18.460258  371479 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.275803227
I1008 14:22:18.480386  371479 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.275803227.tar
I1008 14:22:18.506842  371479 build_images.go:217] Built localhost/my-image:functional-882741 from /tmp/build.275803227.tar
I1008 14:22:18.506891  371479 build_images.go:133] succeeded building to: functional-882741
I1008 14:22:18.506896  371479 build_images.go:134] failed building to: 
I1008 14:22:18.506923  371479 main.go:141] libmachine: Making call to close driver server
I1008 14:22:18.506932  371479 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:18.507282  371479 main.go:141] libmachine: (functional-882741) DBG | Closing plugin on server side
I1008 14:22:18.507319  371479 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:18.507337  371479 main.go:141] libmachine: Making call to close connection to plugin binary
I1008 14:22:18.507369  371479 main.go:141] libmachine: Making call to close driver server
I1008 14:22:18.507384  371479 main.go:141] libmachine: (functional-882741) Calling .Close
I1008 14:22:18.507663  371479 main.go:141] libmachine: Successfully made call to close driver server
I1008 14:22:18.507686  371479 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)
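The image built above can be reproduced by hand. A minimal sketch, assuming a build context reconstructed from the STEP 1/3 through 3/3 lines (the real testdata/build directory is not shown in this log, and the content.txt payload is a placeholder):

    # Assumed reconstruction of testdata/build, inferred from the STEP lines above
    mkdir -p /tmp/build && cd /tmp/build
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt   # placeholder payload; actual contents unknown
    out/minikube-linux-amd64 -p functional-882741 image build -t localhost/my-image:functional-882741 /tmp/build --alsologtostderr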

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.72756279s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-882741
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-882741 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-882741 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zl98b" [189a49ff-2e97-46a0-9670-faece6f7d2d8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-zl98b" [189a49ff-2e97-46a0-9670-faece6f7d2d8] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004741048s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)
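The deploy-and-wait sequence above can be replayed manually. A sketch under the assumption that kubectl wait is an acceptable stand-in for the harness's poll (the test itself watches pods labelled app=hello-node rather than the Deployment condition):

    kubectl --context functional-882741 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-882741 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-882741 wait --for=condition=available deployment/hello-node --timeout=600s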

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image load --daemon kicbase/echo-server:functional-882741 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-882741 image load --daemon kicbase/echo-server:functional-882741 --alsologtostderr: (1.148277089s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image load --daemon kicbase/echo-server:functional-882741 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-882741
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image load --daemon kicbase/echo-server:functional-882741 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image save kicbase/echo-server:functional-882741 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image rm kicbase/echo-server:functional-882741 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-882741
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 image save --daemon kicbase/echo-server:functional-882741 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-882741
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 service list -o json
functional_test.go:1504: Took "294.681963ms" to run "out/minikube-linux-amd64 -p functional-882741 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.97:31834
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.97:31834
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
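As a follow-up one could probe the discovered endpoint directly; this is a hypothetical manual check, not something the test performs (it only asserts that a URL is returned):

    curl http://192.168.39.97:31834/   # hypothetical probe of the NodePort the test found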

TestFunctional/parallel/MountCmd/any-port (22.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdany-port2775860841/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759933307091382003" to /tmp/TestFunctionalparallelMountCmdany-port2775860841/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759933307091382003" to /tmp/TestFunctionalparallelMountCmdany-port2775860841/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759933307091382003" to /tmp/TestFunctionalparallelMountCmdany-port2775860841/001/test-1759933307091382003
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.33675ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1008 14:21:47.376149  361915 retry.go:31] will retry after 356.676456ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  8 14:21 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  8 14:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  8 14:21 test-1759933307091382003
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh cat /mount-9p/test-1759933307091382003
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-882741 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6447c5d4-58ff-4b9d-a482-9c1377edb8de] Pending
helpers_test.go:352: "busybox-mount" [6447c5d4-58ff-4b9d-a482-9c1377edb8de] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6447c5d4-58ff-4b9d-a482-9c1377edb8de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6447c5d4-58ff-4b9d-a482-9c1377edb8de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.003882605s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-882741 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdany-port2775860841/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.79s)
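The any-port flow above condenses to four commands. A sketch assuming a stand-in host directory (/tmp/mnt); as the log shows, the first findmnt may fail once and need a retry while the 9p mount settles:

    out/minikube-linux-amd64 mount -p functional-882741 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &   # mount daemon in the background
    out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p"                # may need a retry
    out/minikube-linux-amd64 -p functional-882741 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-882741 ssh "sudo umount -f /mount-9p"                      # cleanup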

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "445.961287ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.712161ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "434.487953ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "64.75495ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdspecific-port1620273708/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.353931ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1008 14:22:10.215784  361915 retry.go:31] will retry after 319.593915ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdspecific-port1620273708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh "sudo umount -f /mount-9p": exit status 1 (229.071451ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-882741 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdspecific-port1620273708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3605153669/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3605153669/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3605153669/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T" /mount1: exit status 1 (254.744587ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1008 14:22:11.982327  361915 retry.go:31] will retry after 569.720335ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882741 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-882741 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3605153669/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3605153669/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882741 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3605153669/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)
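What this test exercises is the single kill switch for mount daemons: one command tears down all three mounts, which is why each subsequent stop reports that the parent process no longer exists. The same cleanup by hand:

    out/minikube-linux-amd64 mount -p functional-882741 --kill=true   # kills every mount daemon for the profile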

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-882741
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-882741
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-882741
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (233.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:23:02.314533  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:23:30.025820  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m52.276539904s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (233.01s)

TestMultiControlPlane/serial/DeployApp (6.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 kubectl -- rollout status deployment/busybox: (4.639367809s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-pw5df -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-s6hgq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-v68xm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-pw5df -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-s6hgq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-v68xm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-pw5df -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-s6hgq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-v68xm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.87s)
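The nine per-pod DNS checks above can be collapsed into a loop; pod names (busybox-7b57f96db7-*) vary per run, so this sketch reads them back from the cluster first:

    for pod in $(out/minikube-linux-amd64 -p ha-748586 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 -p ha-748586 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done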

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-pw5df -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-pw5df -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-s6hgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-s6hgq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-v68xm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 kubectl -- exec busybox-7b57f96db7-v68xm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
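The shell pipeline inside each exec extracts the resolved host IP: the test assumes busybox nslookup prints the answer on line 5, takes the third space-separated field as the address, and then pings it. Run inside any of the pods:

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # prints 192.168.39.1 in this run
    ping -c 1 192.168.39.1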

TestMultiControlPlane/serial/AddWorkerNode (46.34s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node add --alsologtostderr -v 5
E1008 14:26:36.605627  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:36.612082  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:36.623532  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:36.645024  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:36.686494  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:36.768019  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:36.929927  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:37.251705  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:37.893633  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:39.175182  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:41.736534  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:46.857901  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:26:57.100042  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 node add --alsologtostderr -v 5: (45.446865036s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.34s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-748586 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (13.55s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp testdata/cp-test.txt ha-748586:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile193512915/001/cp-test_ha-748586.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586:/home/docker/cp-test.txt ha-748586-m02:/home/docker/cp-test_ha-748586_ha-748586-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test.txt"
E1008 14:27:17.582387  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test_ha-748586_ha-748586-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586:/home/docker/cp-test.txt ha-748586-m03:/home/docker/cp-test_ha-748586_ha-748586-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test_ha-748586_ha-748586-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586:/home/docker/cp-test.txt ha-748586-m04:/home/docker/cp-test_ha-748586_ha-748586-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test_ha-748586_ha-748586-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp testdata/cp-test.txt ha-748586-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile193512915/001/cp-test_ha-748586-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m02:/home/docker/cp-test.txt ha-748586:/home/docker/cp-test_ha-748586-m02_ha-748586.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test_ha-748586-m02_ha-748586.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m02:/home/docker/cp-test.txt ha-748586-m03:/home/docker/cp-test_ha-748586-m02_ha-748586-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test_ha-748586-m02_ha-748586-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m02:/home/docker/cp-test.txt ha-748586-m04:/home/docker/cp-test_ha-748586-m02_ha-748586-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test_ha-748586-m02_ha-748586-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp testdata/cp-test.txt ha-748586-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile193512915/001/cp-test_ha-748586-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m03:/home/docker/cp-test.txt ha-748586:/home/docker/cp-test_ha-748586-m03_ha-748586.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test_ha-748586-m03_ha-748586.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m03:/home/docker/cp-test.txt ha-748586-m02:/home/docker/cp-test_ha-748586-m03_ha-748586-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test_ha-748586-m03_ha-748586-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m03:/home/docker/cp-test.txt ha-748586-m04:/home/docker/cp-test_ha-748586-m03_ha-748586-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test_ha-748586-m03_ha-748586-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp testdata/cp-test.txt ha-748586-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile193512915/001/cp-test_ha-748586-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m04:/home/docker/cp-test.txt ha-748586:/home/docker/cp-test_ha-748586-m04_ha-748586.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586 "sudo cat /home/docker/cp-test_ha-748586-m04_ha-748586.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m04:/home/docker/cp-test.txt ha-748586-m02:/home/docker/cp-test_ha-748586-m04_ha-748586-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test_ha-748586-m04_ha-748586-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 cp ha-748586-m04:/home/docker/cp-test.txt ha-748586-m03:/home/docker/cp-test_ha-748586-m04_ha-748586-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m03 "sudo cat /home/docker/cp-test_ha-748586-m04_ha-748586-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.55s)
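Each leg of the copy matrix above is the same round trip; a representative pair, taken verbatim from the log:

    out/minikube-linux-amd64 -p ha-748586 cp testdata/cp-test.txt ha-748586-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-748586 ssh -n ha-748586-m02 "sudo cat /home/docker/cp-test.txt"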

TestMultiControlPlane/serial/StopSecondaryNode (82.38s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node stop m02 --alsologtostderr -v 5
E1008 14:27:58.544885  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:28:02.309615  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 node stop m02 --alsologtostderr -v 5: (1m21.699930949s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5: exit status 7 (681.334501ms)

-- stdout --
	ha-748586
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-748586-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748586-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-748586-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1008 14:28:50.543550  376350 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:28:50.543994  376350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:28:50.544006  376350 out.go:374] Setting ErrFile to fd 2...
	I1008 14:28:50.544012  376350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:28:50.544231  376350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:28:50.544491  376350 out.go:368] Setting JSON to false
	I1008 14:28:50.544535  376350 mustload.go:65] Loading cluster: ha-748586
	I1008 14:28:50.544627  376350 notify.go:220] Checking for updates...
	I1008 14:28:50.545004  376350 config.go:182] Loaded profile config "ha-748586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:28:50.545023  376350 status.go:174] checking status of ha-748586 ...
	I1008 14:28:50.545534  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.545585  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.565415  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I1008 14:28:50.566016  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.566704  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.566734  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.567112  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.567328  376350 main.go:141] libmachine: (ha-748586) Calling .GetState
	I1008 14:28:50.569363  376350 status.go:371] ha-748586 host status = "Running" (err=<nil>)
	I1008 14:28:50.569382  376350 host.go:66] Checking if "ha-748586" exists ...
	I1008 14:28:50.569702  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.569746  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.584364  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I1008 14:28:50.584861  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.585267  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.585290  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.585726  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.585924  376350 main.go:141] libmachine: (ha-748586) Calling .GetIP
	I1008 14:28:50.589274  376350 main.go:141] libmachine: (ha-748586) DBG | domain ha-748586 has defined MAC address 52:54:00:0f:9c:9c in network mk-ha-748586
	I1008 14:28:50.589951  376350 main.go:141] libmachine: (ha-748586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9c:9c", ip: ""} in network mk-ha-748586: {Iface:virbr1 ExpiryTime:2025-10-08 15:22:42 +0000 UTC Type:0 Mac:52:54:00:0f:9c:9c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-748586 Clientid:01:52:54:00:0f:9c:9c}
	I1008 14:28:50.589996  376350 main.go:141] libmachine: (ha-748586) DBG | domain ha-748586 has defined IP address 192.168.39.79 and MAC address 52:54:00:0f:9c:9c in network mk-ha-748586
	I1008 14:28:50.590180  376350 host.go:66] Checking if "ha-748586" exists ...
	I1008 14:28:50.590648  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.590698  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.606023  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I1008 14:28:50.606577  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.607096  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.607116  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.607462  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.607693  376350 main.go:141] libmachine: (ha-748586) Calling .DriverName
	I1008 14:28:50.607881  376350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:28:50.607912  376350 main.go:141] libmachine: (ha-748586) Calling .GetSSHHostname
	I1008 14:28:50.611423  376350 main.go:141] libmachine: (ha-748586) DBG | domain ha-748586 has defined MAC address 52:54:00:0f:9c:9c in network mk-ha-748586
	I1008 14:28:50.611930  376350 main.go:141] libmachine: (ha-748586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9c:9c", ip: ""} in network mk-ha-748586: {Iface:virbr1 ExpiryTime:2025-10-08 15:22:42 +0000 UTC Type:0 Mac:52:54:00:0f:9c:9c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-748586 Clientid:01:52:54:00:0f:9c:9c}
	I1008 14:28:50.611973  376350 main.go:141] libmachine: (ha-748586) DBG | domain ha-748586 has defined IP address 192.168.39.79 and MAC address 52:54:00:0f:9c:9c in network mk-ha-748586
	I1008 14:28:50.612114  376350 main.go:141] libmachine: (ha-748586) Calling .GetSSHPort
	I1008 14:28:50.612333  376350 main.go:141] libmachine: (ha-748586) Calling .GetSSHKeyPath
	I1008 14:28:50.612514  376350 main.go:141] libmachine: (ha-748586) Calling .GetSSHUsername
	I1008 14:28:50.612726  376350 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/ha-748586/id_rsa Username:docker}
	I1008 14:28:50.700158  376350 ssh_runner.go:195] Run: systemctl --version
	I1008 14:28:50.707203  376350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:28:50.727131  376350 kubeconfig.go:125] found "ha-748586" server: "https://192.168.39.254:8443"
	I1008 14:28:50.727184  376350 api_server.go:166] Checking apiserver status ...
	I1008 14:28:50.727221  376350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:28:50.748207  376350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	W1008 14:28:50.759668  376350 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:28:50.759741  376350 ssh_runner.go:195] Run: ls
	I1008 14:28:50.766274  376350 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1008 14:28:50.772530  376350 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1008 14:28:50.772560  376350 status.go:463] ha-748586 apiserver status = Running (err=<nil>)
	I1008 14:28:50.772571  376350 status.go:176] ha-748586 status: &{Name:ha-748586 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:28:50.772600  376350 status.go:174] checking status of ha-748586-m02 ...
	I1008 14:28:50.772896  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.772934  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.786948  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37031
	I1008 14:28:50.787492  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.788031  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.788055  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.788476  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.788744  376350 main.go:141] libmachine: (ha-748586-m02) Calling .GetState
	I1008 14:28:50.790659  376350 status.go:371] ha-748586-m02 host status = "Stopped" (err=<nil>)
	I1008 14:28:50.790691  376350 status.go:384] host is not running, skipping remaining checks
	I1008 14:28:50.790700  376350 status.go:176] ha-748586-m02 status: &{Name:ha-748586-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:28:50.790721  376350 status.go:174] checking status of ha-748586-m03 ...
	I1008 14:28:50.791070  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.791117  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.806113  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I1008 14:28:50.806675  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.807240  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.807265  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.807701  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.807977  376350 main.go:141] libmachine: (ha-748586-m03) Calling .GetState
	I1008 14:28:50.809978  376350 status.go:371] ha-748586-m03 host status = "Running" (err=<nil>)
	I1008 14:28:50.809998  376350 host.go:66] Checking if "ha-748586-m03" exists ...
	I1008 14:28:50.810374  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.810437  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.825410  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
	I1008 14:28:50.825872  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.826331  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.826369  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.826784  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.826987  376350 main.go:141] libmachine: (ha-748586-m03) Calling .GetIP
	I1008 14:28:50.830304  376350 main.go:141] libmachine: (ha-748586-m03) DBG | domain ha-748586-m03 has defined MAC address 52:54:00:aa:d4:39 in network mk-ha-748586
	I1008 14:28:50.830979  376350 main.go:141] libmachine: (ha-748586-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:d4:39", ip: ""} in network mk-ha-748586: {Iface:virbr1 ExpiryTime:2025-10-08 15:24:44 +0000 UTC Type:0 Mac:52:54:00:aa:d4:39 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-748586-m03 Clientid:01:52:54:00:aa:d4:39}
	I1008 14:28:50.831003  376350 main.go:141] libmachine: (ha-748586-m03) DBG | domain ha-748586-m03 has defined IP address 192.168.39.183 and MAC address 52:54:00:aa:d4:39 in network mk-ha-748586
	I1008 14:28:50.831285  376350 host.go:66] Checking if "ha-748586-m03" exists ...
	I1008 14:28:50.831666  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:50.831724  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:50.846037  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43975
	I1008 14:28:50.846747  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:50.847271  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:50.847292  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:50.847657  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:50.847874  376350 main.go:141] libmachine: (ha-748586-m03) Calling .DriverName
	I1008 14:28:50.848074  376350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:28:50.848095  376350 main.go:141] libmachine: (ha-748586-m03) Calling .GetSSHHostname
	I1008 14:28:50.851269  376350 main.go:141] libmachine: (ha-748586-m03) DBG | domain ha-748586-m03 has defined MAC address 52:54:00:aa:d4:39 in network mk-ha-748586
	I1008 14:28:50.851791  376350 main.go:141] libmachine: (ha-748586-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:d4:39", ip: ""} in network mk-ha-748586: {Iface:virbr1 ExpiryTime:2025-10-08 15:24:44 +0000 UTC Type:0 Mac:52:54:00:aa:d4:39 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-748586-m03 Clientid:01:52:54:00:aa:d4:39}
	I1008 14:28:50.851822  376350 main.go:141] libmachine: (ha-748586-m03) DBG | domain ha-748586-m03 has defined IP address 192.168.39.183 and MAC address 52:54:00:aa:d4:39 in network mk-ha-748586
	I1008 14:28:50.851994  376350 main.go:141] libmachine: (ha-748586-m03) Calling .GetSSHPort
	I1008 14:28:50.852175  376350 main.go:141] libmachine: (ha-748586-m03) Calling .GetSSHKeyPath
	I1008 14:28:50.852340  376350 main.go:141] libmachine: (ha-748586-m03) Calling .GetSSHUsername
	I1008 14:28:50.852486  376350 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/ha-748586-m03/id_rsa Username:docker}
	I1008 14:28:50.936165  376350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:28:50.957174  376350 kubeconfig.go:125] found "ha-748586" server: "https://192.168.39.254:8443"
	I1008 14:28:50.957212  376350 api_server.go:166] Checking apiserver status ...
	I1008 14:28:50.957253  376350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:28:50.980280  376350 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1756/cgroup
	W1008 14:28:50.995506  376350 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1756/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:28:50.995569  376350 ssh_runner.go:195] Run: ls
	I1008 14:28:51.001311  376350 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1008 14:28:51.006676  376350 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1008 14:28:51.006706  376350 status.go:463] ha-748586-m03 apiserver status = Running (err=<nil>)
	I1008 14:28:51.006714  376350 status.go:176] ha-748586-m03 status: &{Name:ha-748586-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:28:51.006731  376350 status.go:174] checking status of ha-748586-m04 ...
	I1008 14:28:51.007086  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:51.007125  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:51.021428  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I1008 14:28:51.022016  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:51.022561  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:51.022593  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:51.023018  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:51.023271  376350 main.go:141] libmachine: (ha-748586-m04) Calling .GetState
	I1008 14:28:51.025379  376350 status.go:371] ha-748586-m04 host status = "Running" (err=<nil>)
	I1008 14:28:51.025404  376350 host.go:66] Checking if "ha-748586-m04" exists ...
	I1008 14:28:51.025896  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:51.025947  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:51.040180  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I1008 14:28:51.040674  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:51.041188  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:51.041216  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:51.041617  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:51.041816  376350 main.go:141] libmachine: (ha-748586-m04) Calling .GetIP
	I1008 14:28:51.045343  376350 main.go:141] libmachine: (ha-748586-m04) DBG | domain ha-748586-m04 has defined MAC address 52:54:00:88:6a:b7 in network mk-ha-748586
	I1008 14:28:51.045978  376350 main.go:141] libmachine: (ha-748586-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:6a:b7", ip: ""} in network mk-ha-748586: {Iface:virbr1 ExpiryTime:2025-10-08 15:26:44 +0000 UTC Type:0 Mac:52:54:00:88:6a:b7 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-748586-m04 Clientid:01:52:54:00:88:6a:b7}
	I1008 14:28:51.046007  376350 main.go:141] libmachine: (ha-748586-m04) DBG | domain ha-748586-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:88:6a:b7 in network mk-ha-748586
	I1008 14:28:51.046212  376350 host.go:66] Checking if "ha-748586-m04" exists ...
	I1008 14:28:51.046615  376350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:28:51.046677  376350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:28:51.061205  376350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I1008 14:28:51.061729  376350 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:28:51.062235  376350 main.go:141] libmachine: Using API Version  1
	I1008 14:28:51.062257  376350 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:28:51.062767  376350 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:28:51.062985  376350 main.go:141] libmachine: (ha-748586-m04) Calling .DriverName
	I1008 14:28:51.063187  376350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:28:51.063208  376350 main.go:141] libmachine: (ha-748586-m04) Calling .GetSSHHostname
	I1008 14:28:51.066972  376350 main.go:141] libmachine: (ha-748586-m04) DBG | domain ha-748586-m04 has defined MAC address 52:54:00:88:6a:b7 in network mk-ha-748586
	I1008 14:28:51.067458  376350 main.go:141] libmachine: (ha-748586-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:6a:b7", ip: ""} in network mk-ha-748586: {Iface:virbr1 ExpiryTime:2025-10-08 15:26:44 +0000 UTC Type:0 Mac:52:54:00:88:6a:b7 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-748586-m04 Clientid:01:52:54:00:88:6a:b7}
	I1008 14:28:51.067504  376350 main.go:141] libmachine: (ha-748586-m04) DBG | domain ha-748586-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:88:6a:b7 in network mk-ha-748586
	I1008 14:28:51.067728  376350 main.go:141] libmachine: (ha-748586-m04) Calling .GetSSHPort
	I1008 14:28:51.067949  376350 main.go:141] libmachine: (ha-748586-m04) Calling .GetSSHKeyPath
	I1008 14:28:51.068101  376350 main.go:141] libmachine: (ha-748586-m04) Calling .GetSSHUsername
	I1008 14:28:51.068298  376350 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/ha-748586-m04/id_rsa Username:docker}
	I1008 14:28:51.149826  376350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:28:51.171448  376350 status.go:176] ha-748586-m04 status: &{Name:ha-748586-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.38s)
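Editor's note: the repeated "unable to find freezer cgroup" warnings in the stderr above are non-fatal. The status probe greps /proc/<pid>/cgroup for a freezer controller line, which is absent when the guest uses unified cgroup v2, so the probe falls back to hitting the control-plane VIP's /healthz endpoint directly (the api_server.go:253 lines). A minimal sketch of that fallback check, assuming the server URL from the log and an insecure TLS transport for brevity (the real client is CA-aware; this is not minikube's actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthzOK probes <server>/healthz the way the status check above falls
// back to doing once the freezer-cgroup lookup comes up empty.
// InsecureSkipVerify stands in for minikube's CA-aware client.
func healthzOK(server string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok",
	// which is the "returned 200: ok" pair in the log above.
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := healthzOK("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}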

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (36.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node start m02 --alsologtostderr -v 5
E1008 14:29:20.466536  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 node start m02 --alsologtostderr -v 5: (35.3086788s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5: (1.046355655s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.44s)
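Editor's note: the E-level cert_rotation.go errors interleaved with these runs come from client-go's certificate reloader. The shared kubeconfig still carries auth entries for profiles that earlier tests already deleted (functional-882741 here, addons-527125 later), so reloading their client.crt fails with "no such file or directory"; the tests themselves are unaffected. A sketch for spotting such stale entries with client-go's clientcmd loader (an illustration, not part of the suite; it reads the kubeconfig path from $KUBECONFIG):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// Report kubeconfig users whose client-certificate file no longer exists,
// which is exactly the condition behind the cert_rotation warnings above.
func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG")) // set KUBECONFIG first
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
			fmt.Printf("stale auth-info %q: missing %s\n", name, auth.ClientCertificate)
		}
	}
}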

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.199448548s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 stop --alsologtostderr -v 5
E1008 14:31:36.605994  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:32:04.308814  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:33:02.313939  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 stop --alsologtostderr -v 5: (4m17.617027171s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 start --wait true --alsologtostderr -v 5
E1008 14:34:25.389796  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 start --wait true --alsologtostderr -v 5: (2m11.997517733s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 node delete m03 --alsologtostderr -v 5: (17.789004863s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.63s)
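Editor's note: the go-template passed to kubectl above walks every node's conditions and prints only the Ready status, one line per node. The template language is Go's text/template, so the same expression can be exercised standalone; the sketch below feeds it a hand-built map shaped like `kubectl get nodes -o json` output (hypothetical data, not from this run):

package main

import (
	"os"
	"text/template"
)

// The exact template from the test, minus the shell quoting.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Minimal stand-in for a NodeList: two nodes, each with conditions.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
				map[string]any{"type": "MemoryPressure", "status": "False"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" once per node
}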

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (234.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 stop --alsologtostderr -v 5
E1008 14:36:36.607605  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:38:02.310581  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 stop --alsologtostderr -v 5: (3m54.879058089s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5: exit status 7 (114.814439ms)

                                                
                                                
-- stdout --
	ha-748586
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748586-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748586-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 14:40:13.542311  380216 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:40:13.542635  380216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:40:13.542646  380216 out.go:374] Setting ErrFile to fd 2...
	I1008 14:40:13.542653  380216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:40:13.542913  380216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:40:13.543127  380216 out.go:368] Setting JSON to false
	I1008 14:40:13.543164  380216 mustload.go:65] Loading cluster: ha-748586
	I1008 14:40:13.543293  380216 notify.go:220] Checking for updates...
	I1008 14:40:13.543655  380216 config.go:182] Loaded profile config "ha-748586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:40:13.543679  380216 status.go:174] checking status of ha-748586 ...
	I1008 14:40:13.544168  380216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:40:13.544238  380216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:40:13.564165  380216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44383
	I1008 14:40:13.564753  380216 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:40:13.565384  380216 main.go:141] libmachine: Using API Version  1
	I1008 14:40:13.565409  380216 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:40:13.565825  380216 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:40:13.566034  380216 main.go:141] libmachine: (ha-748586) Calling .GetState
	I1008 14:40:13.567939  380216 status.go:371] ha-748586 host status = "Stopped" (err=<nil>)
	I1008 14:40:13.567958  380216 status.go:384] host is not running, skipping remaining checks
	I1008 14:40:13.567965  380216 status.go:176] ha-748586 status: &{Name:ha-748586 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:40:13.567994  380216 status.go:174] checking status of ha-748586-m02 ...
	I1008 14:40:13.568291  380216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:40:13.568332  380216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:40:13.582374  380216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I1008 14:40:13.582924  380216 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:40:13.583481  380216 main.go:141] libmachine: Using API Version  1
	I1008 14:40:13.583506  380216 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:40:13.583871  380216 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:40:13.584054  380216 main.go:141] libmachine: (ha-748586-m02) Calling .GetState
	I1008 14:40:13.586238  380216 status.go:371] ha-748586-m02 host status = "Stopped" (err=<nil>)
	I1008 14:40:13.586255  380216 status.go:384] host is not running, skipping remaining checks
	I1008 14:40:13.586260  380216 status.go:176] ha-748586-m02 status: &{Name:ha-748586-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:40:13.586279  380216 status.go:174] checking status of ha-748586-m04 ...
	I1008 14:40:13.586618  380216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:40:13.586667  380216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:40:13.600772  380216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I1008 14:40:13.601298  380216 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:40:13.601826  380216 main.go:141] libmachine: Using API Version  1
	I1008 14:40:13.601855  380216 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:40:13.602211  380216 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:40:13.602465  380216 main.go:141] libmachine: (ha-748586-m04) Calling .GetState
	I1008 14:40:13.604274  380216 status.go:371] ha-748586-m04 host status = "Stopped" (err=<nil>)
	I1008 14:40:13.604292  380216 status.go:384] host is not running, skipping remaining checks
	I1008 14:40:13.604299  380216 status.go:176] ha-748586-m04 status: &{Name:ha-748586-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (234.99s)
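Editor's note: the non-zero exit above is the expected outcome, not a failure. Per the status command's documented convention, `minikube status` encodes health as a bitmask in its exit code (1 = host not running, 2 = kubelet not running, 4 = apiserver not running), so a fully stopped cluster exits 1+2+4 = 7, which is what the test asserts after `stop`. A small decoder sketch, assuming that convention:

package main

import "fmt"

// Decode minikube status exit codes, which pack three health bits
// (assumed convention: 1 = host NOK, 2 = kubelet NOK, 4 = apiserver NOK).
func decodeStatus(code int) []string {
	var out []string
	if code&1 != 0 {
		out = append(out, "host not running")
	}
	if code&2 != 0 {
		out = append(out, "kubelet not running")
	}
	if code&4 != 0 {
		out = append(out, "apiserver not running")
	}
	if len(out) == 0 {
		out = append(out, "everything running")
	}
	return out
}

func main() {
	fmt.Println(decodeStatus(7)) // [host not running kubelet not running apiserver not running]
}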

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (93.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:41:36.606486  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.313964394s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (70.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-748586 node add --control-plane --alsologtostderr -v 5: (1m10.038501181s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-748586 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
x
+
TestJSONOutput/start/Command (79.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-877151 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-877151 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.967583909s)
--- PASS: TestJSONOutput/start/Command (79.97s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
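Editor's note: judging by their names, the two parallel audits above assert properties of the `currentstep` values carried by io.k8s.sigs.minikube.step events during start: every value appears once, and values never decrease. A standalone sketch of those two checks (an illustration of the properties being audited, not the suite's actual implementation):

package main

import (
	"fmt"
	"strconv"
)

// Check the two properties over "currentstep" values pulled from step
// events, which minikube emits as strings (e.g. "0" ... "19").
func distinctAndIncreasing(steps []string) (distinct, increasing bool) {
	distinct, increasing = true, true
	seen := map[int]bool{}
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return false, false
		}
		if seen[n] {
			distinct = false
		}
		seen[n] = true
		if n < prev {
			increasing = false
		}
		prev = n
	}
	return distinct, increasing
}

func main() {
	fmt.Println(distinctAndIncreasing([]string{"0", "1", "3", "5"})) // true true
}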

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-877151 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-877151 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.96s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-877151 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-877151 --output=json --user=testUser: (6.960969385s)
--- PASS: TestJSONOutput/stop/Command (6.96s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-917417 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-917417 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.08998ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"72c00510-2abc-4665-bd36-6b3fb073c4f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-917417] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6caa425-cfbc-4af5-b44c-c429c318d821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"6d9053cb-6fb9-474e-9801-8da76c3ceb43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f518729-277c-4b0e-9b6f-ff814bbaea25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig"}}
	{"specversion":"1.0","id":"f2539714-2a0c-4da7-a022-7678bed7548c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube"}}
	{"specversion":"1.0","id":"1c24d11b-5d38-40b2-aa6d-6b675cdb40e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bfedc393-d853-41d2-b07c-cbfa648556d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eb534dab-2c99-4410-b740-595f96c0eaf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-917417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-917417
--- PASS: TestErrorJSONOutput (0.22s)
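Editor's note: every stdout line in this block is a CloudEvents-style envelope: specversion, id, source, type, datacontenttype, and a string-valued data map. Error events (type io.k8s.sigs.minikube.error) carry name, message, and exitcode, which is how the output ties back to exit status 56. A minimal decoder sketch over the DRV_UNSUPPORTED_OS event shown above:

package main

import (
	"encoding/json"
	"fmt"
)

// event matches the envelope minikube emits with --output=json.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Verbatim error event from the stdout block above.
	line := `{"specversion":"1.0","id":"eb534dab-2c99-4410-b740-595f96c0eaf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}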

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (79.6s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-818600 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-818600 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.740918736s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-825873 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-825873 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.977531811s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-818600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-825873
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-825873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-825873
helpers_test.go:175: Cleaning up "first-818600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-818600
--- PASS: TestMinikubeProfile (79.60s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (21.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-552056 --memory=3072 --mount-string /tmp/TestMountStartserial3423759325/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-552056 --memory=3072 --mount-string /tmp/TestMountStartserial3423759325/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.491912677s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.49s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-552056 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-552056 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
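Editor's note: the verification pairs a plain `ls` with `findmnt --json`, which reports the mount point as a `filesystems` array of target/source/fstype/options objects (util-linux's JSON shape, not minikube's). A sketch of the same check, meant to run inside the guest where /minikube-host is mounted:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut mirrors util-linux's `findmnt --json` output shape.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// findmnt exits non-zero when the target is not mounted, so the error
	// check alone is already the pass/fail signal the test relies on.
	raw, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	var out findmntOut
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}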

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-574168 --memory=3072 --mount-string /tmp/TestMountStartserial3423759325/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:46:36.609155  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-574168 --memory=3072 --mount-string /tmp/TestMountStartserial3423759325/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.876761154s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-574168 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-574168 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-552056 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-574168 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-574168 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-574168
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-574168: (1.262110578s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (19.47s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-574168
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-574168: (18.472299919s)
--- PASS: TestMountStart/serial/RestartStopped (19.47s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-574168 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-574168 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (131.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-454917 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:48:02.309984  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-454917 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m10.790535989s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.23s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-454917 -- rollout status deployment/busybox: (4.847224107s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-2jsd4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-ggg9c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-2jsd4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-ggg9c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-2jsd4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-ggg9c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.41s)
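Editor's note: the three lookup targets escalate deliberately: kubernetes.io exercises external DNS through the node, kubernetes.default relies on the pod's resolv.conf search domains, and kubernetes.default.svc.cluster.local checks the fully qualified in-cluster service name. The same probes from Go's resolver look like the sketch below (run it inside a pod for the last two to succeed; outside a cluster only the first resolves):

package main

import (
	"fmt"
	"net"
)

// Resolve the same three names the busybox pods are asked to look up.
func main() {
	for _, host := range []string{
		"kubernetes.io",                        // external DNS via the node
		"kubernetes.default",                   // relies on the pod's search domains
		"kubernetes.default.svc.cluster.local", // fully qualified service name
	} {
		addrs, err := net.LookupHost(host)
		fmt.Printf("%-40s %v %v\n", host, addrs, err)
	}
}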

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-2jsd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-2jsd4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-ggg9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-454917 -- exec busybox-7b57f96db7-ggg9c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
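Editor's note: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` depends on busybox nslookup's fixed layout: the answer line lands on line 5 and its third space-separated field is the IP (192.168.39.1 here), which the follow-up ping uses to prove pod-to-host reachability. The same extraction in Go, over a sample that mimics busybox's output format (the sample text is an assumption, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

// hostIP replicates `awk 'NR==5' | cut -d' ' -f3` over nslookup output.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -f3
}

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.39.1
}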

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-454917 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-454917 -v=5 --alsologtostderr: (41.432453199s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.06s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-454917 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp testdata/cp-test.txt multinode-454917:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2693675063/001/cp-test_multinode-454917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917:/home/docker/cp-test.txt multinode-454917-m02:/home/docker/cp-test_multinode-454917_multinode-454917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test_multinode-454917_multinode-454917-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917:/home/docker/cp-test.txt multinode-454917-m03:/home/docker/cp-test_multinode-454917_multinode-454917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m03 "sudo cat /home/docker/cp-test_multinode-454917_multinode-454917-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp testdata/cp-test.txt multinode-454917-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2693675063/001/cp-test_multinode-454917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917-m02:/home/docker/cp-test.txt multinode-454917:/home/docker/cp-test_multinode-454917-m02_multinode-454917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917 "sudo cat /home/docker/cp-test_multinode-454917-m02_multinode-454917.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917-m02:/home/docker/cp-test.txt multinode-454917-m03:/home/docker/cp-test_multinode-454917-m02_multinode-454917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m03 "sudo cat /home/docker/cp-test_multinode-454917-m02_multinode-454917-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp testdata/cp-test.txt multinode-454917-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2693675063/001/cp-test_multinode-454917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917-m03:/home/docker/cp-test.txt multinode-454917:/home/docker/cp-test_multinode-454917-m03_multinode-454917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917 "sudo cat /home/docker/cp-test_multinode-454917-m03_multinode-454917.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 cp multinode-454917-m03:/home/docker/cp-test.txt multinode-454917-m02:/home/docker/cp-test_multinode-454917-m03_multinode-454917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test_multinode-454917-m03_multinode-454917-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.59s)
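
For reference, the matrix above exercises every direction of "minikube cp"; a minimal sketch against the same profile (the host-side destination path here is illustrative, the test uses a temp dir):

	$ minikube -p multinode-454917 cp testdata/cp-test.txt multinode-454917:/home/docker/cp-test.txt        # host -> node
	$ minikube -p multinode-454917 cp multinode-454917:/home/docker/cp-test.txt /tmp/cp-test.txt            # node -> host
	$ minikube -p multinode-454917 cp multinode-454917:/home/docker/cp-test.txt multinode-454917-m02:/home/docker/cp-test.txt   # node -> node
	$ minikube -p multinode-454917 ssh -n multinode-454917-m02 "sudo cat /home/docker/cp-test.txt"          # verify on the target node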

TestMultiNode/serial/StopNode (2.59s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-454917 node stop m03: (1.689265627s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-454917 status: exit status 7 (448.633532ms)

-- stdout --
	multinode-454917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-454917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-454917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr: exit status 7 (447.09834ms)

-- stdout --
	multinode-454917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-454917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-454917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1008 14:50:12.847791  387929 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:50:12.848114  387929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:12.848126  387929 out.go:374] Setting ErrFile to fd 2...
	I1008 14:50:12.848130  387929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:50:12.848330  387929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:50:12.848546  387929 out.go:368] Setting JSON to false
	I1008 14:50:12.848580  387929 mustload.go:65] Loading cluster: multinode-454917
	I1008 14:50:12.848662  387929 notify.go:220] Checking for updates...
	I1008 14:50:12.849106  387929 config.go:182] Loaded profile config "multinode-454917": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:50:12.849127  387929 status.go:174] checking status of multinode-454917 ...
	I1008 14:50:12.849702  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:12.849737  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:12.867716  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I1008 14:50:12.868255  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:12.868967  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:12.869006  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:12.869417  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:12.869670  387929 main.go:141] libmachine: (multinode-454917) Calling .GetState
	I1008 14:50:12.871671  387929 status.go:371] multinode-454917 host status = "Running" (err=<nil>)
	I1008 14:50:12.871689  387929 host.go:66] Checking if "multinode-454917" exists ...
	I1008 14:50:12.872024  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:12.872086  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:12.886046  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I1008 14:50:12.886514  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:12.887014  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:12.887056  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:12.887447  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:12.887671  387929 main.go:141] libmachine: (multinode-454917) Calling .GetIP
	I1008 14:50:12.891156  387929 main.go:141] libmachine: (multinode-454917) DBG | domain multinode-454917 has defined MAC address 52:54:00:0e:84:ce in network mk-multinode-454917
	I1008 14:50:12.891673  387929 main.go:141] libmachine: (multinode-454917) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:84:ce", ip: ""} in network mk-multinode-454917: {Iface:virbr1 ExpiryTime:2025-10-08 15:47:16 +0000 UTC Type:0 Mac:52:54:00:0e:84:ce Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-454917 Clientid:01:52:54:00:0e:84:ce}
	I1008 14:50:12.891700  387929 main.go:141] libmachine: (multinode-454917) DBG | domain multinode-454917 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:84:ce in network mk-multinode-454917
	I1008 14:50:12.891860  387929 host.go:66] Checking if "multinode-454917" exists ...
	I1008 14:50:12.892194  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:12.892237  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:12.907398  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I1008 14:50:12.907846  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:12.908333  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:12.908378  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:12.908741  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:12.908955  387929 main.go:141] libmachine: (multinode-454917) Calling .DriverName
	I1008 14:50:12.909158  387929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:12.909180  387929 main.go:141] libmachine: (multinode-454917) Calling .GetSSHHostname
	I1008 14:50:12.912627  387929 main.go:141] libmachine: (multinode-454917) DBG | domain multinode-454917 has defined MAC address 52:54:00:0e:84:ce in network mk-multinode-454917
	I1008 14:50:12.913139  387929 main.go:141] libmachine: (multinode-454917) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:84:ce", ip: ""} in network mk-multinode-454917: {Iface:virbr1 ExpiryTime:2025-10-08 15:47:16 +0000 UTC Type:0 Mac:52:54:00:0e:84:ce Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-454917 Clientid:01:52:54:00:0e:84:ce}
	I1008 14:50:12.913166  387929 main.go:141] libmachine: (multinode-454917) DBG | domain multinode-454917 has defined IP address 192.168.39.60 and MAC address 52:54:00:0e:84:ce in network mk-multinode-454917
	I1008 14:50:12.913415  387929 main.go:141] libmachine: (multinode-454917) Calling .GetSSHPort
	I1008 14:50:12.913602  387929 main.go:141] libmachine: (multinode-454917) Calling .GetSSHKeyPath
	I1008 14:50:12.913765  387929 main.go:141] libmachine: (multinode-454917) Calling .GetSSHUsername
	I1008 14:50:12.913931  387929 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/multinode-454917/id_rsa Username:docker}
	I1008 14:50:12.994999  387929 ssh_runner.go:195] Run: systemctl --version
	I1008 14:50:13.001590  387929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:50:13.019024  387929 kubeconfig.go:125] found "multinode-454917" server: "https://192.168.39.60:8443"
	I1008 14:50:13.019079  387929 api_server.go:166] Checking apiserver status ...
	I1008 14:50:13.019127  387929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 14:50:13.041402  387929 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup
	W1008 14:50:13.053223  387929 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1008 14:50:13.053301  387929 ssh_runner.go:195] Run: ls
	I1008 14:50:13.058600  387929 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I1008 14:50:13.064475  387929 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I1008 14:50:13.064502  387929 status.go:463] multinode-454917 apiserver status = Running (err=<nil>)
	I1008 14:50:13.064511  387929 status.go:176] multinode-454917 status: &{Name:multinode-454917 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:50:13.064526  387929 status.go:174] checking status of multinode-454917-m02 ...
	I1008 14:50:13.064824  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:13.064862  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:13.079245  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40579
	I1008 14:50:13.079760  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:13.080186  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:13.080206  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:13.080588  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:13.080826  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .GetState
	I1008 14:50:13.082874  387929 status.go:371] multinode-454917-m02 host status = "Running" (err=<nil>)
	I1008 14:50:13.082896  387929 host.go:66] Checking if "multinode-454917-m02" exists ...
	I1008 14:50:13.083378  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:13.083429  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:13.097886  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I1008 14:50:13.098344  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:13.098889  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:13.098911  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:13.099267  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:13.099473  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .GetIP
	I1008 14:50:13.102791  387929 main.go:141] libmachine: (multinode-454917-m02) DBG | domain multinode-454917-m02 has defined MAC address 52:54:00:0a:13:30 in network mk-multinode-454917
	I1008 14:50:13.103394  387929 main.go:141] libmachine: (multinode-454917-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:13:30", ip: ""} in network mk-multinode-454917: {Iface:virbr1 ExpiryTime:2025-10-08 15:48:44 +0000 UTC Type:0 Mac:52:54:00:0a:13:30 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:multinode-454917-m02 Clientid:01:52:54:00:0a:13:30}
	I1008 14:50:13.103435  387929 main.go:141] libmachine: (multinode-454917-m02) DBG | domain multinode-454917-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:0a:13:30 in network mk-multinode-454917
	I1008 14:50:13.103637  387929 host.go:66] Checking if "multinode-454917-m02" exists ...
	I1008 14:50:13.103982  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:13.104033  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:13.118712  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43905
	I1008 14:50:13.119236  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:13.119742  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:13.119769  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:13.120149  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:13.120349  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .DriverName
	I1008 14:50:13.120554  387929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 14:50:13.120581  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .GetSSHHostname
	I1008 14:50:13.124189  387929 main.go:141] libmachine: (multinode-454917-m02) DBG | domain multinode-454917-m02 has defined MAC address 52:54:00:0a:13:30 in network mk-multinode-454917
	I1008 14:50:13.124716  387929 main.go:141] libmachine: (multinode-454917-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:13:30", ip: ""} in network mk-multinode-454917: {Iface:virbr1 ExpiryTime:2025-10-08 15:48:44 +0000 UTC Type:0 Mac:52:54:00:0a:13:30 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:multinode-454917-m02 Clientid:01:52:54:00:0a:13:30}
	I1008 14:50:13.124742  387929 main.go:141] libmachine: (multinode-454917-m02) DBG | domain multinode-454917-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:0a:13:30 in network mk-multinode-454917
	I1008 14:50:13.124927  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .GetSSHPort
	I1008 14:50:13.125148  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .GetSSHKeyPath
	I1008 14:50:13.125322  387929 main.go:141] libmachine: (multinode-454917-m02) Calling .GetSSHUsername
	I1008 14:50:13.125511  387929 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21681-357044/.minikube/machines/multinode-454917-m02/id_rsa Username:docker}
	I1008 14:50:13.207622  387929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 14:50:13.224644  387929 status.go:176] multinode-454917-m02 status: &{Name:multinode-454917-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:50:13.224687  387929 status.go:174] checking status of multinode-454917-m03 ...
	I1008 14:50:13.225032  387929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:50:13.225081  387929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:50:13.239768  387929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I1008 14:50:13.240313  387929 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:50:13.240857  387929 main.go:141] libmachine: Using API Version  1
	I1008 14:50:13.240885  387929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:50:13.241299  387929 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:50:13.241508  387929 main.go:141] libmachine: (multinode-454917-m03) Calling .GetState
	I1008 14:50:13.243388  387929 status.go:371] multinode-454917-m03 host status = "Stopped" (err=<nil>)
	I1008 14:50:13.243402  387929 status.go:384] host is not running, skipping remaining checks
	I1008 14:50:13.243407  387929 status.go:176] multinode-454917-m03 status: &{Name:multinode-454917-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.59s)
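
Note that "status" deliberately exits non-zero once any machine in the profile is down, which is what the exit status 7 above reflects; a minimal sketch:

	$ minikube -p multinode-454917 node stop m03
	$ minikube -p multinode-454917 status    # per-node state as above; exits 7 while m03 is stopped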

TestMultiNode/serial/StartAfterStop (39.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-454917 node start m03 -v=5 --alsologtostderr: (39.14161171s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.81s)

TestMultiNode/serial/RestartKeepsNodes (305.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-454917
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-454917
E1008 14:51:05.393433  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:51:36.609174  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:53:02.314319  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-454917: (2m47.822802021s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-454917 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-454917 --wait=true -v=5 --alsologtostderr: (2m17.403819596s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-454917
--- PASS: TestMultiNode/serial/RestartKeepsNodes (305.33s)
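
The round-trip above is a full stop followed by a --wait=true start on the same profile, after which "node list" reports the same machines; a minimal sketch:

	$ minikube stop -p multinode-454917
	$ minikube start -p multinode-454917 --wait=true
	$ minikube node list -p multinode-454917    # all three nodes survive the restart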

TestMultiNode/serial/DeleteNode (2.9s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-454917 node delete m03: (2.307793641s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.90s)

TestMultiNode/serial/StopMultiNode (163.42s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 stop
E1008 14:56:36.605820  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 14:58:02.314088  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-454917 stop: (2m43.227720132s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-454917 status: exit status 7 (96.872128ms)

-- stdout --
	multinode-454917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-454917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr: exit status 7 (93.75584ms)

-- stdout --
	multinode-454917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-454917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1008 14:58:44.659998  390711 out.go:360] Setting OutFile to fd 1 ...
	I1008 14:58:44.660260  390711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:58:44.660273  390711 out.go:374] Setting ErrFile to fd 2...
	I1008 14:58:44.660277  390711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 14:58:44.660544  390711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 14:58:44.660781  390711 out.go:368] Setting JSON to false
	I1008 14:58:44.660816  390711 mustload.go:65] Loading cluster: multinode-454917
	I1008 14:58:44.660886  390711 notify.go:220] Checking for updates...
	I1008 14:58:44.661462  390711 config.go:182] Loaded profile config "multinode-454917": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 14:58:44.661480  390711 status.go:174] checking status of multinode-454917 ...
	I1008 14:58:44.662074  390711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:58:44.662119  390711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:58:44.681145  390711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33137
	I1008 14:58:44.681714  390711 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:58:44.682394  390711 main.go:141] libmachine: Using API Version  1
	I1008 14:58:44.682423  390711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:58:44.682896  390711 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:58:44.683226  390711 main.go:141] libmachine: (multinode-454917) Calling .GetState
	I1008 14:58:44.685071  390711 status.go:371] multinode-454917 host status = "Stopped" (err=<nil>)
	I1008 14:58:44.685094  390711 status.go:384] host is not running, skipping remaining checks
	I1008 14:58:44.685102  390711 status.go:176] multinode-454917 status: &{Name:multinode-454917 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 14:58:44.685148  390711 status.go:174] checking status of multinode-454917-m02 ...
	I1008 14:58:44.685663  390711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1008 14:58:44.685720  390711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1008 14:58:44.699717  390711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I1008 14:58:44.700409  390711 main.go:141] libmachine: () Calling .GetVersion
	I1008 14:58:44.700979  390711 main.go:141] libmachine: Using API Version  1
	I1008 14:58:44.701010  390711 main.go:141] libmachine: () Calling .SetConfigRaw
	I1008 14:58:44.701426  390711 main.go:141] libmachine: () Calling .GetMachineName
	I1008 14:58:44.701639  390711 main.go:141] libmachine: (multinode-454917-m02) Calling .GetState
	I1008 14:58:44.703460  390711 status.go:371] multinode-454917-m02 host status = "Stopped" (err=<nil>)
	I1008 14:58:44.703477  390711 status.go:384] host is not running, skipping remaining checks
	I1008 14:58:44.703484  390711 status.go:176] multinode-454917-m02 status: &{Name:multinode-454917-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (163.42s)

TestMultiNode/serial/RestartMultiNode (127.07s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-454917 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 14:59:39.672655  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-454917 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m6.512894246s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-454917 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (127.07s)

TestMultiNode/serial/ValidateNameConflict (44.02s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-454917
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-454917-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-454917-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.148092ms)

-- stdout --
	* [multinode-454917-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-454917-m02' is duplicated with machine name 'multinode-454917-m02' in profile 'multinode-454917'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-454917-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-454917-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.801971627s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-454917
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-454917: exit status 80 (229.48846ms)

-- stdout --
	* Adding node m03 to cluster multinode-454917 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-454917-m03 already exists in multinode-454917-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-454917-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.02s)
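
Both rejections above are name-collision checks: a new profile may not reuse a machine name owned by an existing multi-node profile (exit 14, MK_USAGE), and "node add" refuses a node name that already exists as a standalone profile (exit 80, GUEST_NODE_ADD). A minimal reproduction, given the clusters above:

	$ minikube start -p multinode-454917-m02 --driver=kvm2 --container-runtime=crio    # collides with a machine in multinode-454917
	$ minikube node add -p multinode-454917                                            # m03 is already taken by the standalone profile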

TestScheduledStopUnix (110.96s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-710015 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-710015 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.162100899s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-710015 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-710015 -n scheduled-stop-710015
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-710015 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1008 15:05:00.024695  361915 retry.go:31] will retry after 133.449µs: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.025883  361915 retry.go:31] will retry after 85.09µs: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.027039  361915 retry.go:31] will retry after 182.987µs: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.028222  361915 retry.go:31] will retry after 263.136µs: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.029413  361915 retry.go:31] will retry after 615.805µs: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.030559  361915 retry.go:31] will retry after 810.359µs: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.031705  361915 retry.go:31] will retry after 1.435369ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.033921  361915 retry.go:31] will retry after 2.425287ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.037185  361915 retry.go:31] will retry after 3.813086ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.041435  361915 retry.go:31] will retry after 2.493785ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.044751  361915 retry.go:31] will retry after 7.83667ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.053060  361915 retry.go:31] will retry after 8.201082ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.062401  361915 retry.go:31] will retry after 10.116519ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.072656  361915 retry.go:31] will retry after 23.202046ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
I1008 15:05:00.096945  361915 retry.go:31] will retry after 42.504987ms: open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/scheduled-stop-710015/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-710015 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-710015 -n scheduled-stop-710015
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-710015
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-710015 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-710015
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-710015: exit status 7 (78.467879ms)

-- stdout --
	scheduled-stop-710015
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-710015 -n scheduled-stop-710015
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-710015 -n scheduled-stop-710015: exit status 7 (70.668915ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-710015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-710015
--- PASS: TestScheduledStopUnix (110.96s)
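
The whole scheduled-stop flow above is driven by flags on "minikube stop"; a minimal sketch with the same profile:

	$ minikube stop -p scheduled-stop-710015 --schedule 5m                       # arm a stop five minutes out
	$ minikube status -p scheduled-stop-710015 --format={{.TimeToStop}}          # inspect the pending timer
	$ minikube stop -p scheduled-stop-710015 --cancel-scheduled                  # disarm it
	$ minikube stop -p scheduled-stop-710015 --schedule 15s                      # re-arm; status exits 7 once the stop fires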

TestRunningBinaryUpgrade (121.15s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2889801728 start -p running-upgrade-280930 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2889801728 start -p running-upgrade-280930 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.004528562s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-280930 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-280930 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.08569801s)
helpers_test.go:175: Cleaning up "running-upgrade-280930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-280930
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-280930: (1.408970659s)
--- PASS: TestRunningBinaryUpgrade (121.15s)
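
The upgrade path above boots the profile with an old release binary, then re-runs "start" on the same profile with the binary under test, exercising an in-place binary upgrade of a running cluster; a minimal sketch using the binaries from this run:

	$ /tmp/minikube-v1.32.0.2889801728 start -p running-upgrade-280930 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p running-upgrade-280930 --memory=3072 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p running-upgrade-280930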

TestKubernetesUpgrade (143.49s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m8.361339575s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-074115
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-074115: (1.79686108s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-074115 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-074115 status --format={{.Host}}: exit status 7 (86.408132ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.528101763s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-074115 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (93.937547ms)

-- stdout --
	* [kubernetes-upgrade-074115] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-074115
	    minikube start -p kubernetes-upgrade-074115 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0741152 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-074115 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-074115 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (17.560356738s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-074115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-074115
--- PASS: TestKubernetesUpgrade (143.49s)
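
The lifecycle above is: provision at v1.28.0, stop, restart at v1.34.1, confirm that an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED), then restart once more at the upgraded version; a minimal sketch:

	$ minikube start -p kubernetes-upgrade-074115 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	$ minikube stop -p kubernetes-upgrade-074115
	$ minikube start -p kubernetes-upgrade-074115 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
	$ minikube start -p kubernetes-upgrade-074115 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio    # refused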

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694490 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-694490 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (89.107994ms)

-- stdout --
	* [NoKubernetes-694490] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
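
As the MK_USAGE message above notes, --no-kubernetes cannot be combined with --kubernetes-version, and per the hint the version may also come from a pinned global config; a minimal sketch:

	$ minikube start -p NoKubernetes-694490 --no-kubernetes --kubernetes-version=v1.28.0    # exit 14, MK_USAGE
	$ minikube config unset kubernetes-version                                              # clear a globally pinned version
	$ minikube start -p NoKubernetes-694490 --no-kubernetes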

TestNoKubernetes/serial/StartWithK8s (96.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694490 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 15:06:36.605795  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694490 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.62316474s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-694490 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.91s)

TestNetworkPlugins/group/false (3.82s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-900200 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-900200 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (113.868894ms)

-- stdout --
	* [false-900200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1008 15:06:58.194018  395697 out.go:360] Setting OutFile to fd 1 ...
	I1008 15:06:58.194297  395697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:58.194307  395697 out.go:374] Setting ErrFile to fd 2...
	I1008 15:06:58.194312  395697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 15:06:58.194522  395697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-357044/.minikube/bin
	I1008 15:06:58.195039  395697 out.go:368] Setting JSON to false
	I1008 15:06:58.196062  395697 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6550,"bootTime":1759929468,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1008 15:06:58.196153  395697 start.go:141] virtualization: kvm guest
	I1008 15:06:58.198250  395697 out.go:179] * [false-900200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1008 15:06:58.199715  395697 notify.go:220] Checking for updates...
	I1008 15:06:58.199733  395697 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 15:06:58.200984  395697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 15:06:58.202242  395697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-357044/kubeconfig
	I1008 15:06:58.203447  395697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-357044/.minikube
	I1008 15:06:58.204554  395697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1008 15:06:58.205879  395697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 15:06:58.207756  395697 config.go:182] Loaded profile config "NoKubernetes-694490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:58.207900  395697 config.go:182] Loaded profile config "force-systemd-flag-732844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:58.208056  395697 config.go:182] Loaded profile config "offline-crio-644334": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 15:06:58.208198  395697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 15:06:58.250064  395697 out.go:179] * Using the kvm2 driver based on user configuration
	I1008 15:06:58.251310  395697 start.go:305] selected driver: kvm2
	I1008 15:06:58.251330  395697 start.go:925] validating driver "kvm2" against <nil>
	I1008 15:06:58.251364  395697 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 15:06:58.253736  395697 out.go:203] 
	W1008 15:06:58.254953  395697 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1008 15:06:58.256213  395697 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-900200 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-900200

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-900200

>>> host: /etc/nsswitch.conf:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/hosts:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/resolv.conf:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-900200

>>> host: crictl pods:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: crictl containers:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> k8s: describe netcat deployment:
error: context "false-900200" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-900200" does not exist

>>> k8s: netcat logs:
error: context "false-900200" does not exist

>>> k8s: describe coredns deployment:
error: context "false-900200" does not exist

>>> k8s: describe coredns pods:
error: context "false-900200" does not exist

>>> k8s: coredns logs:
error: context "false-900200" does not exist

>>> k8s: describe api server pod(s):
error: context "false-900200" does not exist

>>> k8s: api server logs:
error: context "false-900200" does not exist

>>> host: /etc/cni:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: ip a s:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: ip r s:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: iptables-save:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: iptables table nat:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> k8s: describe kube-proxy daemon set:
error: context "false-900200" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-900200" does not exist

>>> k8s: kube-proxy logs:
error: context "false-900200" does not exist

>>> host: kubelet daemon status:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: kubelet daemon config:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> k8s: kubelet logs:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-900200

>>> host: docker daemon status:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: docker daemon config:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/docker/daemon.json:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: docker system info:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: cri-docker daemon status:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: cri-docker daemon config:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: cri-dockerd version:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: containerd daemon status:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: containerd daemon config:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/containerd/config.toml:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: containerd config dump:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: crio daemon status:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: crio daemon config:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: /etc/crio:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"

>>> host: crio config:
* Profile "false-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-900200"
----------------------- debugLogs end: false-900200 [took: 3.506408104s] --------------------------------
helpers_test.go:175: Cleaning up "false-900200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-900200
--- PASS: TestNetworkPlugins/group/false (3.82s)
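
Every probe in the debugLogs block above failed with "context was not found" or "Profile ... not found" because no false-900200 profile or kubeconfig context existed at collection time; for this plugin group that is the expected (pass: true) outcome. A minimal sketch (not part of the test suite; the context name is copied from the log, the kubeconfig path is the client-go default) of how a collector could verify a context exists before shelling out to kubectl:

	// contextcheck.go - guard kubectl-based probes behind a kubeconfig lookup.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		name := "false-900200"
		if _, ok := cfg.Contexts[name]; !ok {
			// Matches the output above: the context is absent, skip the probes.
			fmt.Printf("context %q not found; skipping kubectl probes\n", name)
			return
		}
		fmt.Printf("context %q exists; safe to run kubectl --context %s\n", name, name)
	}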

TestNoKubernetes/serial/StartWithStopK8s (33.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1008 15:08:02.309666  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (32.788503869s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-694490 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-694490 status -o json: exit status 2 (262.279434ms)
-- stdout --
	{"Name":"NoKubernetes-694490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-694490
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.98s)
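
The stdout block above captures the whole contract of this step: with --no-kubernetes the VM host is Running while Kubelet and APIServer stay Stopped, which is why `minikube status` exits 2 even though the test passes. A minimal sketch (not minikube's own code; the struct simply mirrors the JSON keys shown above) of decoding that output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileStatus mirrors the fields of `minikube status -o json` seen above.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-694490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}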

TestNoKubernetes/serial/Start (21.82s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.819403487s)
--- PASS: TestNoKubernetes/serial/Start (21.82s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-694490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-694490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.261362ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
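
The check above leans on systemctl semantics: `systemctl is-active` exits 0 only when the unit is active, so the non-zero exit seen in the stderr block is the passing outcome for a --no-kubernetes profile. A minimal sketch of the same probe (assumptions: minikube on PATH; profile name and remote command copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-694490",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				// Non-zero exit: kubelet is not an active unit - the expected result here.
				fmt.Println("kubelet not running, exit status:", ee.ExitCode())
				return
			}
			fmt.Println("could not run minikube ssh:", err)
			return
		}
		fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
	}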

TestNoKubernetes/serial/ProfileList (1.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

TestNoKubernetes/serial/Stop (1.39s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-694490
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-694490: (1.388132247s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

TestNoKubernetes/serial/StartNoArgs (53.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-694490 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-694490 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.331554145s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-694490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-694490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.025662ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestPause/serial/Start (90.45s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-783785 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-783785 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.452645791s)
--- PASS: TestPause/serial/Start (90.45s)

TestStoppedBinaryUpgrade/Setup (2.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

TestStoppedBinaryUpgrade/Upgrade (100.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3081831071 start -p stopped-upgrade-236862 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3081831071 start -p stopped-upgrade-236862 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.04694583s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3081831071 -p stopped-upgrade-236862 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3081831071 -p stopped-upgrade-236862 stop: (1.808736726s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-236862 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-236862 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.869230061s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.73s)
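
The upgrade scenario above is three binary invocations in sequence: bring the cluster up with a released v1.32.0 binary, stop it, then start the same profile with the binary under test. A minimal sketch of that flow (paths and profile name copied from the log; flags trimmed, so this is illustrative rather than the test's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
		}
	}

	func main() {
		old := "/tmp/minikube-v1.32.0.3081831071" // released binary, from the log
		cur := "out/minikube-linux-amd64"         // binary under test
		run(old, "start", "-p", "stopped-upgrade-236862", "--memory=3072")
		run(old, "-p", "stopped-upgrade-236862", "stop")
		run(cur, "start", "-p", "stopped-upgrade-236862", "--memory=3072")
	}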

TestNetworkPlugins/group/auto/Start (90.03s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.030404222s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-236862
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-236862: (1.158926799s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestNetworkPlugins/group/kindnet/Start (58.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.617628016s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-900200 "pgrep -a kubelet"
I1008 15:12:28.384946  361915 config.go:182] Loaded profile config "auto-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-900200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h56fg" [fd594f05-755e-4018-b2d5-c639e1e33fc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h56fg" [fd594f05-755e-4018-b2d5-c639e1e33fc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005620115s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)
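
Each NetCatPod step follows the same pattern: replace the netcat deployment, then poll up to 15m for a pod matching app=netcat to reach phase Running (the helpers_test.go lines above show it moving from Pending to Running). A minimal client-go sketch of such a wait loop (assumptions: default kubeconfig location; the suite's real helper lives in helpers_test.go, not in this code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(15 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=netcat"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("pod running:", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for app=netcat")
	}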

TestNetworkPlugins/group/calico/Start (96.02s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m36.024223751s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.02s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
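
The DNS/Localhost/HairPin trio above probes three paths from inside the netcat pod: service DNS resolution, a loopback connection on 8080, and finally a hairpin connection, i.e. the pod reaching itself through its own "netcat" service. A minimal sketch of what the hairpin probe checks, equivalent in spirit to `nc -w 5 -z netcat 8080` run inside the pod (illustrative only):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "netcat:8080" resolves to the service in front of the pod itself;
		// success means hairpin NAT lets a pod reach itself via its service.
		conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
		if err != nil {
			fmt.Println("hairpin connection failed:", err)
			return
		}
		conn.Close()
		fmt.Println("hairpin connection succeeded")
	}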

TestNetworkPlugins/group/custom-flannel/Start (84.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.098639138s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (84.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vdwz9" [1768e9f2-c75b-40f4-9e19-b280a561a2cf] Running
E1008 15:13:02.309700  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005430108s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-900200 "pgrep -a kubelet"
I1008 15:13:03.174242  361915 config.go:182] Loaded profile config "kindnet-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.63s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-900200 replace --force -f testdata/netcat-deployment.yaml
I1008 15:13:03.752053  361915 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1008 15:13:03.763232  361915 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jqhp7" [f9a76ca4-f641-46c1-be57-62784be2ed12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jqhp7" [f9a76ca4-f641-46c1-be57-62784be2ed12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00470067s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.63s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (84.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.139469524s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.14s)

TestNetworkPlugins/group/flannel/Start (76.31s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.306121011s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.31s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xmncb" [ae07ffdc-0f3d-48d6-af7f-0eb0cd385d78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006668994s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-900200 "pgrep -a kubelet"
I1008 15:14:15.350894  361915 config.go:182] Loaded profile config "calico-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-900200 replace --force -f testdata/netcat-deployment.yaml
I1008 15:14:15.608914  361915 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4vnnc" [2c866f1e-5c2a-4d7f-8c08-33d0fab6beb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4vnnc" [2c866f1e-5c2a-4d7f-8c08-33d0fab6beb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.051310156s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-900200 "pgrep -a kubelet"
I1008 15:14:20.431722  361915 config.go:182] Loaded profile config "custom-flannel-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-900200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hhhm2" [d907cb75-e298-4908-9d53-5b78c14c9046] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hhhm2" [d907cb75-e298-4908-9d53-5b78c14c9046] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005332187s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (84.39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-900200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.388243762s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.39s)

TestStartStop/group/old-k8s-version/serial/FirstStart (112.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-462309 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-462309 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m52.836817767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (112.84s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-900200 "pgrep -a kubelet"
I1008 15:14:58.096808  361915 config.go:182] Loaded profile config "enable-default-cni-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-900200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vcjh8" [52ec6264-4579-4868-9458-d6db96b1a6d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vcjh8" [52ec6264-4579-4868-9458-d6db96b1a6d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005200234s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7pd4b" [c676fa3d-40ca-4c8b-94aa-a09f50a47515] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006336562s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-900200 "pgrep -a kubelet"
I1008 15:15:19.385802  361915 config.go:182] Loaded profile config "flannel-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-900200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m8wnk" [159c5af2-d269-4243-9536-62c35b677185] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m8wnk" [159c5af2-d269-4243-9536-62c35b677185] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005268978s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

TestStartStop/group/no-preload/serial/FirstStart (108.1s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-280182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-280182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m48.094854239s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.10s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestStartStop/group/embed-certs/serial/FirstStart (85.4s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-499258 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-499258 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m25.403926816s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.40s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-900200 "pgrep -a kubelet"
I1008 15:16:08.860207  361915 config.go:182] Loaded profile config "bridge-900200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-900200 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lhbx8" [3761ee64-dc5b-4d30-95b7-9889ea1dbf76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lhbx8" [3761ee64-dc5b-4d30-95b7-9889ea1dbf76] Running
E1008 15:16:19.675070  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/functional-882741/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004502442s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)
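
Note: NetCatPod force-replaces the netcat deployment and then waits (up to 15m) for the app=netcat pod to report Ready; the interleaved cert_rotation errors reference kubeconfig entries for profiles deleted earlier in the run and do not affect this test. A roughly equivalent manual wait (a sketch):

	kubectl --context bridge-900200 rollout status deployment/netcat --timeout=15m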

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-900200 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-900200 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
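
Note: Localhost and HairPin both probe port 8080 with nc -z, first on localhost and then back through the pod's own Service name; the hairpin probe only succeeds when the CNI handles loopback (hairpin) NAT for service traffic originating from the backing pod. A manual probe, assuming the netcat Service exposes port 8080:

	kubectl --context bridge-900200 exec deployment/netcat -- nc -w 5 -z netcat 8080 && echo hairpin-ok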

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-934353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-934353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m21.876119561s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.88s)
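
Note: --apiserver-port=8444 moves the API server off minikube's default 8443. The advertised endpoint can be read back with (a sketch):

	kubectl --context default-k8s-diff-port-934353 cluster-info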

TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-462309 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [93262391-398e-434c-a23b-bdb6c41fea04] Pending
helpers_test.go:352: "busybox" [93262391-398e-434c-a23b-bdb6c41fea04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [93262391-398e-434c-a23b-bdb6c41fea04] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004964811s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-462309 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)
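
Note: DeployApp creates the busybox pod from testdata, waits for it to report Ready, then runs ulimit -n inside it to confirm the container inherits the runtime's open-file limit; the same sequence repeats for the other profiles below. The exec step, standalone:

	kubectl --context old-k8s-version-462309 exec busybox -- /bin/sh -c "ulimit -n"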

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-462309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-462309 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.195774329s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-462309 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)
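
Note: the --images/--registries overrides point metrics-server at a deliberately unreachable registry (fake.domain), so this subtest only verifies that the addon wires the override into the deployment spec, not that the pod can pull and run. A narrower check than the describe above (a sketch, assuming the image field ends up prefixed with the fake registry):

	kubectl --context old-k8s-version-462309 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'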

TestStartStop/group/old-k8s-version/serial/Stop (75.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-462309 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-462309 --alsologtostderr -v=3: (1m15.616411732s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (75.62s)

TestStartStop/group/no-preload/serial/DeployApp (11.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-280182 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [207fd124-f035-4c05-a9ef-5a0e849cbfc6] Pending
helpers_test.go:352: "busybox" [207fd124-f035-4c05-a9ef-5a0e849cbfc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [207fd124-f035-4c05-a9ef-5a0e849cbfc6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005603396s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-280182 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.34s)

TestStartStop/group/embed-certs/serial/DeployApp (11.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-499258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bd0b32a5-acc0-4a26-b7e8-50656e213955] Pending
helpers_test.go:352: "busybox" [bd0b32a5-acc0-4a26-b7e8-50656e213955] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bd0b32a5-acc0-4a26-b7e8-50656e213955] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004665215s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-499258 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-280182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-280182 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (87.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-280182 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-280182 --alsologtostderr -v=3: (1m27.084587373s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (87.08s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-499258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-499258 describe deploy/metrics-server -n kube-system
E1008 15:17:28.684616  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:28.690971  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:28.702447  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (89.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-499258 --alsologtostderr -v=3
E1008 15:17:28.724654  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:28.766915  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:28.848449  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:29.010235  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:29.331595  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:29.973282  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:31.254704  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:33.816131  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:38.938315  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:49.179997  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:56.695460  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:56.701969  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:56.713451  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:56.734937  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:56.776431  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:56.857949  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:57.019632  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:57.341401  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:57.982825  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:17:59.264505  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-499258 --alsologtostderr -v=3: (1m29.484162986s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (89.48s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-934353 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f895a900-c62e-4a40-a68e-8493e3f9bff3] Pending
helpers_test.go:352: "busybox" [f895a900-c62e-4a40-a68e-8493e3f9bff3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1008 15:18:01.826831  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:18:02.310049  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/addons-527125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [f895a900-c62e-4a40-a68e-8493e3f9bff3] Running
E1008 15:18:06.948886  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005194299s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-934353 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-934353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1008 15:18:09.661841  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-934353 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (86.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-934353 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-934353 --alsologtostderr -v=3: (1m26.802095317s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (86.80s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462309 -n old-k8s-version-462309
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462309 -n old-k8s-version-462309: exit status 7 (69.080891ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-462309 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
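
Note: a stopped profile makes minikube status exit non-zero (status 7 here), which the harness explicitly tolerates ("may be ok"); addons can still be toggled while the cluster is down, with the change taking effect on the next start. The recorded addon state can be listed with (a sketch):

	out/minikube-linux-amd64 addons list -p old-k8s-version-462309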

TestStartStop/group/old-k8s-version/serial/SecondStart (45.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-462309 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1008 15:18:17.190525  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:18:37.672562  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:18:50.623733  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-462309 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.812608877s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462309 -n old-k8s-version-462309
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.21s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-280182 -n no-preload-280182
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-280182 -n no-preload-280182: exit status 7 (78.903352ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-280182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (57.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-280182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-280182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (56.767548911s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-280182 -n no-preload-280182
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.19s)
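
Note: --preload=false skips the preloaded images/filesystem tarball, so component images are pulled or loaded from the local cache individually; that is the main reason this profile's start and stop phases run longer than the preloaded profiles in this report. The resulting image set can be inspected with (a sketch):

	out/minikube-linux-amd64 -p no-preload-280182 image list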

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bmsrg" [c26509f9-32cf-46f7-bfe1-9239710318e2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bmsrg" [c26509f9-32cf-46f7-bfe1-9239710318e2] Running
E1008 15:19:09.117347  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.123793  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.135310  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.157031  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.198552  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.280065  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.441649  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:09.763632  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:10.405069  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:11.687071  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005463554s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)
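
Note: UserAppExistsAfterStop (and the AddonExistsAfterStop subtest below) verifies that the dashboard workload enabled before the stop comes back healthy after SecondStart, i.e. that cluster state survives a stop/start cycle. A manual equivalent (a sketch):

	kubectl --context old-k8s-version-462309 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m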

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-499258 -n embed-certs-499258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-499258 -n embed-certs-499258: exit status 7 (80.111581ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-499258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (58.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-499258 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-499258 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (57.894619328s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-499258 -n embed-certs-499258
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.38s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bmsrg" [c26509f9-32cf-46f7-bfe1-9239710318e2] Running
E1008 15:19:14.249266  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004516792s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-462309 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-462309 image list --format=json
E1008 15:19:18.634794  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
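
Note: the "Found non-minikube image" lines are informational; VerifyKubernetesImages lists the runtime's images and logs anything outside the expected kubernetes set for visibility, and the test still passes with extras present (busybox, for instance, was pulled by DeployApp above). To reproduce the listing (a sketch, assuming jq is available and each JSON entry carries repoTags):

	out/minikube-linux-amd64 -p old-k8s-version-462309 image list --format=json | jq -r '.[].repoTags[]'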

TestStartStop/group/old-k8s-version/serial/Pause (3.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-462309 --alsologtostderr -v=1
E1008 15:19:19.370831  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462309 -n old-k8s-version-462309
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462309 -n old-k8s-version-462309: exit status 2 (300.22387ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462309 -n old-k8s-version-462309
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462309 -n old-k8s-version-462309: exit status 2 (281.012703ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-462309 --alsologtostderr -v=1
E1008 15:19:20.730979  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:20.737461  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:20.749002  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:20.771000  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:20.812835  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:20.894412  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:21.056137  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462309 -n old-k8s-version-462309
E1008 15:19:21.377810  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462309 -n old-k8s-version-462309
E1008 15:19:22.020235  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.23s)
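
Note: pause freezes the cluster's containers (and stops the kubelet), after which status reports APIServer=Paused and Kubelet=Stopped with exit status 2, again tolerated by the harness; unpause restores both. The cycle exercised above is, in effect:

	out/minikube-linux-amd64 pause -p old-k8s-version-462309 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462309 -n old-k8s-version-462309
	out/minikube-linux-amd64 unpause -p old-k8s-version-462309 --alsologtostderr -v=1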

TestStartStop/group/newest-cni/serial/FirstStart (59.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-510105 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1008 15:19:25.864521  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:29.612551  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:30.985849  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-510105 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (59.718249387s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.72s)
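
Note: this profile starts with --network-plugin=cni plus a custom pod CIDR injected via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, and --wait=apiserver,system_pods,default_sa relaxes the readiness gate to just those components. The applied CIDR can be read back from kubeadm's ClusterConfiguration (a sketch):

	kubectl --context newest-cni-510105 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet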

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353: exit status 7 (94.565752ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-934353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:244: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-934353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.052198485s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-934353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1008 15:19:41.227599  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:50.094507  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-934353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m2.729843983s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353
E1008 15:20:40.556160  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/kindnet-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-79kx4" [fe99181f-b39e-40b2-a42d-b4c1de2f013c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-79kx4" [fe99181f-b39e-40b2-a42d-b4c1de2f013c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.005086572s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-px5ml" [09edfd7b-876e-42ec-9492-63aec94df13e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1008 15:19:58.330328  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.336865  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.348327  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.369805  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.411366  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.493387  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.654755  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:58.976089  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:19:59.617739  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:00.899199  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:01.709567  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-px5ml" [09edfd7b-876e-42ec-9492-63aec94df13e] Running
E1008 15:20:03.460585  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004451323s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-px5ml" [09edfd7b-876e-42ec-9492-63aec94df13e] Running
E1008 15:20:08.582392  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004852838s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-499258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-79kx4" [fe99181f-b39e-40b2-a42d-b4c1de2f013c] Running
E1008 15:20:12.546038  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/auto-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004924127s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-280182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1008 15:20:15.663788  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-499258 image list --format=json
E1008 15:20:13.090419  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:13.096916  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:13.108741  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:13.130254  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:13.174345  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (4.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-499258 --alsologtostderr -v=1
E1008 15:20:13.256620  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:13.418631  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:13.740338  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:14.381809  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-499258 --alsologtostderr -v=1: (1.206275794s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-499258 -n embed-certs-499258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-499258 -n embed-certs-499258: exit status 2 (300.224906ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-499258 -n embed-certs-499258
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-499258 -n embed-certs-499258: exit status 2 (291.295557ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-499258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-499258 --alsologtostderr -v=1: (1.261948342s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-499258 -n embed-certs-499258
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-499258 -n embed-certs-499258
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.26s)
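All of the Pause subtests in this group drive the same sequence seen above: pause the profile, check that status reports the API server as Paused and the kubelet as Stopped (status exits 2 here, which the harness notes "may be ok"), then unpause and re-check. A rough Go sketch of that flow, assuming a minikube binary on PATH rather than the harness's own helpers in start_stop_delete_test.go:

```go
// Sketch of the pause/unpause verification flow; not the test's code.
package main

import (
	"fmt"
	"os/exec"
)

// status ignores the exit code because `minikube status` exits non-zero
// (status 2 in this log) while components are paused or stopped.
func status(profile, field string) string {
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return string(out)
}

func main() {
	const p = "embed-certs-499258" // profile name from the log
	if err := exec.Command("minikube", "pause", "-p", p).Run(); err != nil {
		panic(err)
	}
	fmt.Print(status(p, "APIServer")) // expected: Paused
	fmt.Print(status(p, "Kubelet"))   // expected: Stopped
	if err := exec.Command("minikube", "unpause", "-p", p).Run(); err != nil {
		panic(err)
	}
}
```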

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-280182 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
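VerifyKubernetesImages lists the images loaded in the profile and flags anything that is not a stock minikube/Kubernetes image (here the busybox test image). A generic way to consume the same command from Go, decoding loosely since the JSON schema is not shown in this log (a sketch, not the test's parser):

```go
// Sketch: invoke `image list --format=json` and decode it generically.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "no-preload-280182",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images interface{} // exact schema unknown, so stay untyped
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", images)
}
```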

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.73s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-280182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-280182 --alsologtostderr -v=1: (1.384154531s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-280182 -n no-preload-280182
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-280182 -n no-preload-280182: exit status 2 (525.438394ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-280182 -n no-preload-280182
E1008 15:20:18.225125  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-280182 -n no-preload-280182: exit status 2 (293.122146ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-280182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-280182 -n no-preload-280182
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-280182 -n no-preload-280182
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-510105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-510105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04050032s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.7s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-510105 --alsologtostderr -v=3
E1008 15:20:31.056632  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/calico-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 15:20:33.589345  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-510105 --alsologtostderr -v=3: (11.698094213s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-510105 -n newest-cni-510105
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-510105 -n newest-cni-510105: exit status 7 (82.569486ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-510105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (34.4s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-510105 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1008 15:20:39.305144  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/enable-default-cni-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-510105 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (34.108543887s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-510105 -n newest-cni-510105
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fnttn" [507a7e95-e1b1-4b88-9182-42a9da42032d] Running
E1008 15:20:42.671055  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/custom-flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004009893s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
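The "waiting 9m0s for pods matching" lines come from a harness helper (helpers_test.go:352) that polls pods by label selector until they report Ready. Roughly equivalent standalone client-go code, assuming the profile's kubeconfig context is current (a sketch, not the harness implementation):

```go
// Sketch: poll the kubernetes-dashboard namespace until all pods with
// the k8s-app=kubernetes-dashboard label are Ready, as the test does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod has the Ready condition set to True.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute) // timeout from the log
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil && allReady(pods.Items) {
			fmt.Println("healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for kubernetes-dashboard pods")
}
```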

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fnttn" [507a7e95-e1b1-4b88-9182-42a9da42032d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004098166s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-934353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-934353 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-934353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353: exit status 2 (269.472657ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353: exit status 2 (282.94047ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-934353 --alsologtostderr -v=1
E1008 15:20:54.071445  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/flannel-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-934353 -n default-k8s-diff-port-934353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-510105 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.55s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-510105 --alsologtostderr -v=1
E1008 15:21:11.717956  361915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-357044/.minikube/profiles/bridge-900200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-510105 -n newest-cni-510105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-510105 -n newest-cni-510105: exit status 2 (252.098874ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-510105 -n newest-cni-510105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-510105 -n newest-cni-510105: exit status 2 (249.378121ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-510105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-510105 -n newest-cni-510105
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-510105 -n newest-cni-510105
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 5.05
267 TestNetworkPlugins/group/cilium 4.33
275 TestStartStop/group/disable-driver-mounts 0.17

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-527125 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.05s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-900200 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-900200

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-900200

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/hosts:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/resolv.conf:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-900200

>>> host: crictl pods:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: crictl containers:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> k8s: describe netcat deployment:
error: context "kubenet-900200" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-900200" does not exist

>>> k8s: netcat logs:
error: context "kubenet-900200" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-900200" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-900200" does not exist

>>> k8s: coredns logs:
error: context "kubenet-900200" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-900200" does not exist

>>> k8s: api server logs:
error: context "kubenet-900200" does not exist

>>> host: /etc/cni:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: ip a s:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: ip r s:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: iptables-save:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: iptables table nat:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-900200" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-900200" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-900200" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: kubelet daemon config:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> k8s: kubelet logs:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-900200

>>> host: docker daemon status:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: docker daemon config:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: docker system info:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: cri-docker daemon status:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: cri-docker daemon config:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: cri-dockerd version:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: containerd daemon status:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: containerd daemon config:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: containerd config dump:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: crio daemon status:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: crio daemon config:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: /etc/crio:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

>>> host: crio config:
* Profile "kubenet-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-900200"

----------------------- debugLogs end: kubenet-900200 [took: 4.868644985s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-900200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-900200
--- SKIP: TestNetworkPlugins/group/kubenet (5.05s)
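Note: every ">>> host:" probe above failed with the same "Profile not found" message because the kubenet-900200 profile was never created (the test skips before "minikube start" ever runs), yet the debugLogs collector still executed each probe. A minimal sketch of a guard such a collector could use, assuming `minikube profile list --output json` keeps its current {"valid":[{"Name":...}]} shape; profileExists and the hard-coded binary path are illustrative, not the suite's actual helpers:

// profileExists checks minikube's profile list before running host-side
// debug probes, so a deleted/never-created profile short-circuits cleanly.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func profileExists(binary, name string) (bool, error) {
	// Assumption: `minikube profile list --output json` emits a "valid" array.
	out, err := exec.Command(binary, "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list profileList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, p := range list.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("out/minikube-linux-amd64", "kubenet-900200")
	if err != nil || !ok {
		fmt.Println("profile gone; skipping host-side debug probes")
		return
	}
	// ...run the ">>> host:" probes here...
}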

x
+
TestNetworkPlugins/group/cilium (4.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-900200 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-900200

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-900200

>>> host: /etc/nsswitch.conf:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/hosts:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/resolv.conf:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-900200

>>> host: crictl pods:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: crictl containers:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> k8s: describe netcat deployment:
error: context "cilium-900200" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-900200" does not exist

>>> k8s: netcat logs:
error: context "cilium-900200" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-900200" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-900200" does not exist

>>> k8s: coredns logs:
error: context "cilium-900200" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-900200" does not exist

>>> k8s: api server logs:
error: context "cilium-900200" does not exist

>>> host: /etc/cni:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: ip a s:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: ip r s:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: iptables-save:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: iptables table nat:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-900200

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-900200

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-900200" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-900200" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-900200

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-900200

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-900200" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-900200" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-900200" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-900200" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-900200" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: kubelet daemon config:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> k8s: kubelet logs:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-900200

>>> host: docker daemon status:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: docker daemon config:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: docker system info:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: cri-docker daemon status:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: cri-docker daemon config:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: cri-dockerd version:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: containerd daemon status:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: containerd daemon config:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: containerd config dump:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: crio daemon status:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: crio daemon config:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: /etc/crio:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

>>> host: crio config:
* Profile "cilium-900200" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900200"

----------------------- debugLogs end: cilium-900200 [took: 4.159300639s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-900200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-900200
--- SKIP: TestNetworkPlugins/group/cilium (4.33s)
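Note: the cilium dump shows the same two failure modes as kubenet: kubectl probes fail with "context was not found" / "does not exist" (the ">>> k8s: kubectl config:" block above shows an effectively empty kubeconfig: clusters: null, contexts: null), while minikube host probes fail with "Profile not found". A minimal sketch of a kubeconfig pre-check for the k8s probes, assuming only that `kubectl config get-contexts -o name` prints one context name per line; contextExists is illustrative, not part of the suite:

// contextExists asks kubectl whether a named context is present in the
// active kubeconfig, so a collector can skip k8s probes that cannot succeed.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	// Assumption: `kubectl config get-contexts -o name` lists context names,
	// one per line; with an empty kubeconfig it prints nothing.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	if ok, _ := contextExists("cilium-900200"); !ok {
		fmt.Println(`context "cilium-900200" not in kubeconfig; skipping k8s probes`)
		return
	}
	// ...run the ">>> k8s:" probes with --context cilium-900200...
}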

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-453098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-453098
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
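Note: the skip at start_stop_delete_test.go:101 is a driver gate: the test only makes sense where driver mounts exist to disable. A minimal sketch of that pattern, with a hypothetical requireDriver helper and a MINIKUBE_TEST_DRIVER env var standing in for however the real suite carries the driver name through its flags:

// requireDriver skips the calling test unless the suite is running against
// the named driver. Both the helper and the env var are assumptions for
// illustration, not the integration suite's actual wiring.
package integration

import (
	"os"
	"testing"
)

func requireDriver(t *testing.T, want string) {
	t.Helper()
	if got := os.Getenv("MINIKUBE_TEST_DRIVER"); got != want {
		t.Skipf("skipping: test only runs on %s (driver is %q)", want, got)
	}
}

func TestDisableDriverMounts(t *testing.T) {
	requireDriver(t, "virtualbox")
	// ...exercise --disable-driver-mounts here...
}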