Test Report: KVM_Linux_crio 21625

                    
f5ddb069c61c98d891ee28fed061fe1ee97ea306:2025-10-03:41753

Failed tests (2/329)

Order  Failed test                  Duration (s)
37     TestAddons/parallel/Ingress  159.83
243    TestPreload                  160.39
TestAddons/parallel/Ingress (159.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-925003 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-925003 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-925003 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0f7233ef-095e-446e-9144-25bb30b59449] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0f7233ef-095e-446e-9144-25bb30b59449] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003717484s
I1003 17:47:19.994625   12564 kapi.go:150] Service nginx in namespace default found.
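
The readiness gate above polls until a pod matching run=nginx reports Running. A minimal Go sketch of the same poll, shelling out to kubectl much as the harness does; the context name, selector, and 8m deadline come from the log, while the 10s interval is an illustrative choice:

// A sketch only; the real helpers live in helpers_test.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podPhase asks kubectl for the phase of the first pod matching the selector.
func podPhase(kubectx, selector string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubectx,
		"get", "pod", "-l", selector,
		"-o", "jsonpath={.items[0].status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the test waits 8m0s
	for time.Now().Before(deadline) {
		if phase, err := podPhase("addons-925003", "run=nginx"); err == nil && phase == "Running" {
			fmt.Println("pod is running")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx pod")
}
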
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-925003 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.324343066s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
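
curl exits with status 28 when the transfer times out, which ssh relays as "Process exited with status 28" above: the request reached no responder within the window. A minimal Go sketch of the same probe, run from the host against the node IP from the log (192.168.39.143) rather than via ssh to 127.0.0.1; the 30s timeout is an illustrative choice:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}

	req, err := http.NewRequest("GET", "http://192.168.39.143/", nil)
	if err != nil {
		panic(err)
	}
	// Ingress rules match on the Host header, so override it explicitly,
	// mirroring `curl -H 'Host: nginx.example.com'` in the failed command.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here corresponds to curl's exit status 28 above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
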
addons_test.go:288: (dbg) Run:  kubectl --context addons-925003 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.143
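
The ingress-dns check resolves hello-john.test directly against the DNS server the addon exposes on the node IP (192.168.39.143, from the `minikube ip` step). A sketch of the equivalent lookup with a custom resolver, bypassing the host's default DNS the same way the nslookup invocation does:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the default resolver address and ask the node directly.
			return d.DialContext(ctx, network, "192.168.39.143:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}
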
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-925003 -n addons-925003
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 logs -n 25: (1.407153602s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-519501                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-519501 │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │ 03 Oct 25 17:43 UTC │
	│ start   │ --download-only -p binary-mirror-473056 --alsologtostderr --binary-mirror http://127.0.0.1:38937 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-473056 │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │                     │
	│ delete  │ -p binary-mirror-473056                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-473056 │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │ 03 Oct 25 17:43 UTC │
	│ addons  │ disable dashboard -p addons-925003                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │                     │
	│ addons  │ enable dashboard -p addons-925003                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │                     │
	│ start   │ -p addons-925003 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │ 03 Oct 25 17:46 UTC │
	│ addons  │ addons-925003 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:46 UTC │ 03 Oct 25 17:46 UTC │
	│ addons  │ addons-925003 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:46 UTC │ 03 Oct 25 17:46 UTC │
	│ addons  │ enable headlamp -p addons-925003 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:46 UTC │ 03 Oct 25 17:46 UTC │
	│ addons  │ addons-925003 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ ip      │ addons-925003 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-925003                                                                                                                                                                                                                                                                                                                                                                                         │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ ssh     │ addons-925003 ssh cat /opt/local-path-provisioner/pvc-6c53a42d-a019-4ceb-9ee4-98d0f0f5ced2_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:48 UTC │
	│ ssh     │ addons-925003 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │                     │
	│ addons  │ addons-925003 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:47 UTC │
	│ addons  │ addons-925003 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:47 UTC │ 03 Oct 25 17:48 UTC │
	│ ip      │ addons-925003 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-925003        │ jenkins │ v1.37.0 │ 03 Oct 25 17:49 UTC │ 03 Oct 25 17:49 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 17:43:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:43:13.501436   13237 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:43:13.501683   13237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:43:13.501695   13237 out.go:374] Setting ErrFile to fd 2...
	I1003 17:43:13.501700   13237 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:43:13.501976   13237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 17:43:13.502499   13237 out.go:368] Setting JSON to false
	I1003 17:43:13.503329   13237 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1538,"bootTime":1759511856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:43:13.503419   13237 start.go:140] virtualization: kvm guest
	I1003 17:43:13.505617   13237 out.go:179] * [addons-925003] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 17:43:13.507150   13237 notify.go:220] Checking for updates...
	I1003 17:43:13.507250   13237 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 17:43:13.508904   13237 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:43:13.510453   13237 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 17:43:13.511890   13237 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:43:13.513432   13237 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 17:43:13.514927   13237 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:43:13.516495   13237 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:43:13.548095   13237 out.go:179] * Using the kvm2 driver based on user configuration
	I1003 17:43:13.549288   13237 start.go:304] selected driver: kvm2
	I1003 17:43:13.549306   13237 start.go:924] validating driver "kvm2" against <nil>
	I1003 17:43:13.549318   13237 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:43:13.550130   13237 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:43:13.550366   13237 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:43:13.550390   13237 cni.go:84] Creating CNI manager for ""
	I1003 17:43:13.550448   13237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 17:43:13.550463   13237 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:43:13.550505   13237 start.go:348] cluster config:
	{Name:addons-925003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-925003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:43:13.550631   13237 iso.go:125] acquiring lock: {Name:mk4ce219bd5cf5058f69eb8b10ebc9d907f5f7b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:43:13.553040   13237 out.go:179] * Starting "addons-925003" primary control-plane node in "addons-925003" cluster
	I1003 17:43:13.554364   13237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:43:13.554400   13237 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 17:43:13.554408   13237 cache.go:58] Caching tarball of preloaded images
	I1003 17:43:13.554528   13237 preload.go:233] Found /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1003 17:43:13.554544   13237 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 17:43:13.554929   13237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/config.json ...
	I1003 17:43:13.554955   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/config.json: {Name:mk11d2004fee98e29c60ca146ce7351cd0cf606c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:13.555121   13237 start.go:360] acquireMachinesLock for addons-925003: {Name:mk6fc4b452aa995b01198c8d80bd9bad940152be Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 17:43:13.555194   13237 start.go:364] duration metric: took 54.317µs to acquireMachinesLock for "addons-925003"
	I1003 17:43:13.555219   13237 start.go:93] Provisioning new machine with config: &{Name:addons-925003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-925003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 17:43:13.555284   13237 start.go:125] createHost starting for "" (driver="kvm2")
	I1003 17:43:13.557219   13237 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1003 17:43:13.557400   13237 start.go:159] libmachine.API.Create for "addons-925003" (driver="kvm2")
	I1003 17:43:13.557434   13237 client.go:168] LocalClient.Create starting
	I1003 17:43:13.557551   13237 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem
	I1003 17:43:13.892901   13237 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem
	I1003 17:43:14.350185   13237 main.go:141] libmachine: creating domain...
	I1003 17:43:14.350211   13237 main.go:141] libmachine: creating network...
	I1003 17:43:14.351872   13237 main.go:141] libmachine: found existing default network
	I1003 17:43:14.352085   13237 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1003 17:43:14.352632   13237 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1aa00}
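
Subnet selection amounts to probing candidate private /24s and skipping any that collide with an address already assigned on the host. A rough Go sketch of that idea, using only the standard library; this is an illustration, not minikube's actual network.go logic:

package main

import (
	"fmt"
	"net"
)

// inUse reports whether any host interface already holds an address
// inside the candidate subnet.
func inUse(subnet *net.IPNet) bool {
	ifaces, err := net.Interfaces()
	if err != nil {
		return true // be conservative on error
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Candidate third octets; 39 first, matching 192.168.39.0/24 in the log.
	for _, octet := range []int{39, 50, 61, 72} {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		if !inUse(subnet) {
			fmt.Println("using free private subnet", cidr)
			return
		}
	}
	fmt.Println("no free subnet found")
}
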
	I1003 17:43:14.352719   13237 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-925003</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1003 17:43:14.358841   13237 main.go:141] libmachine: creating private network mk-addons-925003 192.168.39.0/24...
	I1003 17:43:14.428058   13237 main.go:141] libmachine: private network mk-addons-925003 192.168.39.0/24 created
	I1003 17:43:14.428387   13237 main.go:141] libmachine: <network>
	  <name>mk-addons-925003</name>
	  <uuid>4dc9a98c-b32d-4801-b867-a652bd496cfb</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:8c:db:62'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1003 17:43:14.428447   13237 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003 ...
	I1003 17:43:14.428470   13237 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21625-8656/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1003 17:43:14.428481   13237 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:43:14.428585   13237 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21625-8656/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21625-8656/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1003 17:43:14.715826   13237 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa...
	I1003 17:43:14.752515   13237 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/addons-925003.rawdisk...
	I1003 17:43:14.752560   13237 main.go:141] libmachine: Writing magic tar header
	I1003 17:43:14.752579   13237 main.go:141] libmachine: Writing SSH key tar header
	I1003 17:43:14.752663   13237 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003 ...
	I1003 17:43:14.752740   13237 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003
	I1003 17:43:14.752766   13237 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003 (perms=drwx------)
	I1003 17:43:14.752795   13237 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21625-8656/.minikube/machines
	I1003 17:43:14.752816   13237 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21625-8656/.minikube/machines (perms=drwxr-xr-x)
	I1003 17:43:14.752835   13237 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:43:14.752848   13237 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21625-8656/.minikube (perms=drwxr-xr-x)
	I1003 17:43:14.752863   13237 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21625-8656
	I1003 17:43:14.752878   13237 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21625-8656 (perms=drwxrwxr-x)
	I1003 17:43:14.752891   13237 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1003 17:43:14.752907   13237 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1003 17:43:14.752925   13237 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1003 17:43:14.752941   13237 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1003 17:43:14.752964   13237 main.go:141] libmachine: checking permissions on dir: /home
	I1003 17:43:14.752974   13237 main.go:141] libmachine: skipping /home - not owner
	I1003 17:43:14.752980   13237 main.go:141] libmachine: defining domain...
	I1003 17:43:14.754332   13237 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-925003</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/addons-925003.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-925003'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
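
Defining and starting the domain from the XML above is a two-step libvirt operation, matching the "defining domain..." and "starting domain..." steps in this log. A minimal sketch assuming the libvirt.org/go/libvirt bindings; the kvm2 driver's real code differs:

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("domain.xml") // the <domain> definition above
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // same URI as the log
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// DomainDefineXML makes the domain persistent; Create actually boots it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
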
	
	I1003 17:43:14.762452   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:28:21:75 in network default
	I1003 17:43:14.763138   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:14.763156   13237 main.go:141] libmachine: starting domain...
	I1003 17:43:14.763161   13237 main.go:141] libmachine: ensuring networks are active...
	I1003 17:43:14.764036   13237 main.go:141] libmachine: Ensuring network default is active
	I1003 17:43:14.764408   13237 main.go:141] libmachine: Ensuring network mk-addons-925003 is active
	I1003 17:43:14.765003   13237 main.go:141] libmachine: getting domain XML...
	I1003 17:43:14.766115   13237 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-925003</name>
	  <uuid>4c28fc72-6c07-4f6b-a9be-eecabdcc7e06</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/addons-925003.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:df:f9:b5'/>
	      <source network='mk-addons-925003'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:28:21:75'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1003 17:43:16.097570   13237 main.go:141] libmachine: waiting for domain to start...
	I1003 17:43:16.099123   13237 main.go:141] libmachine: domain is now running
	I1003 17:43:16.099148   13237 main.go:141] libmachine: waiting for IP...
	I1003 17:43:16.099980   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:16.100611   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:16.100626   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:16.100998   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:16.101038   13237 retry.go:31] will retry after 204.466304ms: waiting for domain to come up
	I1003 17:43:16.307482   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:16.308082   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:16.308099   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:16.308392   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:16.308426   13237 retry.go:31] will retry after 348.879718ms: waiting for domain to come up
	I1003 17:43:16.659227   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:16.659768   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:16.659800   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:16.660148   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:16.660183   13237 retry.go:31] will retry after 467.967023ms: waiting for domain to come up
	I1003 17:43:17.129860   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:17.130523   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:17.130537   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:17.131006   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:17.131041   13237 retry.go:31] will retry after 427.825093ms: waiting for domain to come up
	I1003 17:43:17.561095   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:17.561873   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:17.561895   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:17.562260   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:17.562297   13237 retry.go:31] will retry after 598.5999ms: waiting for domain to come up
	I1003 17:43:18.162217   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:18.162899   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:18.162917   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:18.163304   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:18.163348   13237 retry.go:31] will retry after 893.602292ms: waiting for domain to come up
	I1003 17:43:19.058361   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:19.058921   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:19.058939   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:19.059267   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:19.059304   13237 retry.go:31] will retry after 814.437981ms: waiting for domain to come up
	I1003 17:43:19.875210   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:19.875700   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:19.875712   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:19.876007   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:19.876037   13237 retry.go:31] will retry after 1.179449595s: waiting for domain to come up
	I1003 17:43:21.057702   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:21.058647   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:21.058670   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:21.059059   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:21.059096   13237 retry.go:31] will retry after 1.815423156s: waiting for domain to come up
	I1003 17:43:22.877046   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:22.877644   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:22.877660   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:22.877986   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:22.878024   13237 retry.go:31] will retry after 1.726623549s: waiting for domain to come up
	I1003 17:43:24.606175   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:24.606902   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:24.606920   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:24.607246   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:24.607280   13237 retry.go:31] will retry after 2.673688645s: waiting for domain to come up
	I1003 17:43:27.284091   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:27.284656   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:27.284672   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:27.284989   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:27.285022   13237 retry.go:31] will retry after 2.253690094s: waiting for domain to come up
	I1003 17:43:29.540952   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:29.541462   13237 main.go:141] libmachine: no network interface addresses found for domain addons-925003 (source=lease)
	I1003 17:43:29.541473   13237 main.go:141] libmachine: trying to list again with source=arp
	I1003 17:43:29.541808   13237 main.go:141] libmachine: unable to find current IP address of domain addons-925003 in network mk-addons-925003 (interfaces detected: [])
	I1003 17:43:29.541847   13237 retry.go:31] will retry after 3.069135936s: waiting for domain to come up
	I1003 17:43:32.615119   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:32.615934   13237 main.go:141] libmachine: domain addons-925003 has current primary IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:32.615952   13237 main.go:141] libmachine: found domain IP: 192.168.39.143
	I1003 17:43:32.615960   13237 main.go:141] libmachine: reserving static IP address...
	I1003 17:43:32.616473   13237 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-925003", mac: "52:54:00:df:f9:b5", ip: "192.168.39.143"} in network mk-addons-925003
	I1003 17:43:32.810536   13237 main.go:141] libmachine: reserved static IP address 192.168.39.143 for domain addons-925003
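
The "will retry after ..." lines above come from a backoff loop that polls until the guest acquires a DHCP lease and its IP becomes visible. A minimal Go sketch of the same shape; lookupIP is a hypothetical stand-in for the lease and ARP queries in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for querying libvirt leases / the ARP table.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, in the spirit of the 204ms,
		// 348ms, 467ms... intervals printed by retry.go above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err == nil {
		fmt.Println("found domain IP:", ip)
	}
}
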
	I1003 17:43:32.810560   13237 main.go:141] libmachine: waiting for SSH...
	I1003 17:43:32.810583   13237 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 17:43:32.813652   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:32.814186   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:32.814212   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:32.814497   13237 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:32.814800   13237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1003 17:43:32.814813   13237 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 17:43:32.928834   13237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
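
WaitForSSH boils down to dialing port 22 and running `exit 0` until it succeeds. A sketch using golang.org/x/crypto/ssh with the machine's generated key (key path shortened here; the docker username matches the sshutil.go line later in this log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The harness talks to a VM it just created; a production client
		// should verify the host key instead of ignoring it.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // the exact command shown in the log above
}

func main() {
	err := sshReady("192.168.39.143:22", "docker", "machines/addons-925003/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}
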
	I1003 17:43:32.929333   13237 main.go:141] libmachine: domain creation complete
	I1003 17:43:32.931095   13237 machine.go:93] provisionDockerMachine start ...
	I1003 17:43:32.934752   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:32.935268   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:32.935300   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:32.935468   13237 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:32.935694   13237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1003 17:43:32.935707   13237 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 17:43:33.045871   13237 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 17:43:33.045901   13237 buildroot.go:166] provisioning hostname "addons-925003"
	I1003 17:43:33.049730   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.050353   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:33.050393   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.050634   13237 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:33.050894   13237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1003 17:43:33.050911   13237 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-925003 && echo "addons-925003" | sudo tee /etc/hostname
	I1003 17:43:33.178316   13237 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-925003
	
	I1003 17:43:33.181619   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.182238   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:33.182274   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.182489   13237 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:33.182738   13237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1003 17:43:33.182755   13237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-925003' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-925003/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-925003' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 17:43:33.304526   13237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 17:43:33.304558   13237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8656/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8656/.minikube}
	I1003 17:43:33.304631   13237 buildroot.go:174] setting up certificates
	I1003 17:43:33.304662   13237 provision.go:84] configureAuth start
	I1003 17:43:33.307050   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.307536   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:33.307565   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.310444   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.310805   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:33.310832   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.311014   13237 provision.go:143] copyHostCerts
	I1003 17:43:33.311095   13237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8656/.minikube/ca.pem (1078 bytes)
	I1003 17:43:33.311221   13237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8656/.minikube/cert.pem (1123 bytes)
	I1003 17:43:33.311308   13237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8656/.minikube/key.pem (1679 bytes)
	I1003 17:43:33.311377   13237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8656/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca-key.pem org=jenkins.addons-925003 san=[127.0.0.1 192.168.39.143 addons-925003 localhost minikube]
	I1003 17:43:33.542360   13237 provision.go:177] copyRemoteCerts
	I1003 17:43:33.542431   13237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 17:43:33.545320   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.545670   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:33.545693   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.545855   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:33.632838   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 17:43:33.663968   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 17:43:33.694913   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 17:43:33.726176   13237 provision.go:87] duration metric: took 421.498589ms to configureAuth
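
copyHostCerts only mirrors the per-user certificate files into the .minikube root and logs the byte counts. A rough stdlib-only equivalent of one such copy (illustrative only; paths in main are placeholders):

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
    )

    // copyWithSize copies src to dst and reports the byte count,
    // matching the "cp: src --> dst (N bytes)" lines in the log.
    func copyWithSize(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        n, err := io.Copy(out, in)
        if err != nil {
            return err
        }
        fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
        return nil
    }

    func main() {
        if err := copyWithSize("/tmp/in.pem", "/tmp/out.pem"); err != nil {
            log.Fatal(err)
        }
    }
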
	I1003 17:43:33.726206   13237 buildroot.go:189] setting minikube options for container-runtime
	I1003 17:43:33.726426   13237 config.go:182] Loaded profile config "addons-925003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:43:33.729687   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.730250   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:33.730287   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:33.730607   13237 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:33.730851   13237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1003 17:43:33.730871   13237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 17:43:34.216088   13237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 17:43:34.216127   13237 machine.go:96] duration metric: took 1.285013678s to provisionDockerMachine
	I1003 17:43:34.216143   13237 client.go:171] duration metric: took 20.658702404s to LocalClient.Create
	I1003 17:43:34.216167   13237 start.go:167] duration metric: took 20.658767712s to libmachine.API.Create "addons-925003"
	I1003 17:43:34.216183   13237 start.go:293] postStartSetup for "addons-925003" (driver="kvm2")
	I1003 17:43:34.216197   13237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 17:43:34.216280   13237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 17:43:34.219178   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.219649   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:34.219680   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.219896   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:34.306521   13237 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 17:43:34.311745   13237 info.go:137] Remote host: Buildroot 2025.02
	I1003 17:43:34.311776   13237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8656/.minikube/addons for local assets ...
	I1003 17:43:34.311870   13237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8656/.minikube/files for local assets ...
	I1003 17:43:34.311894   13237 start.go:296] duration metric: took 95.703924ms for postStartSetup
	I1003 17:43:34.344462   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.345000   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:34.345040   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.345270   13237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/config.json ...
	I1003 17:43:34.405979   13237 start.go:128] duration metric: took 20.85067691s to createHost
	I1003 17:43:34.409029   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.409416   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:34.409440   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.409609   13237 main.go:141] libmachine: Using SSH client type: native
	I1003 17:43:34.409860   13237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I1003 17:43:34.409876   13237 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 17:43:34.519330   13237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759513414.484469867
	
	I1003 17:43:34.519355   13237 fix.go:216] guest clock: 1759513414.484469867
	I1003 17:43:34.519363   13237 fix.go:229] Guest: 2025-10-03 17:43:34.484469867 +0000 UTC Remote: 2025-10-03 17:43:34.406008155 +0000 UTC m=+20.952188578 (delta=78.461712ms)
	I1003 17:43:34.519377   13237 fix.go:200] guest clock delta is within tolerance: 78.461712ms
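
The guest-clock check runs `date +%s.%N` in the VM, parses the seconds.nanoseconds pair, and compares it against the host-side timestamp; here the 78.461712ms delta is within tolerance, so no clock adjustment is needed. A self-contained sketch using the values from this log (the one-second tolerance is an assumption for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock parses `date +%s.%N` output from the VM
    // (e.g. "1759513414.484469867") into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1759513414.484469867")
        if err != nil {
            panic(err)
        }
        host := time.Date(2025, 10, 3, 17, 43, 34, 406008155, time.UTC)
        delta := guest.Sub(host) // 78.461712ms, as in the log above
        const tolerance = time.Second
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
    }
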
	I1003 17:43:34.519382   13237 start.go:83] releasing machines lock for "addons-925003", held for 20.964176001s
	I1003 17:43:34.522581   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.523051   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:34.523079   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.523715   13237 ssh_runner.go:195] Run: cat /version.json
	I1003 17:43:34.523858   13237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 17:43:34.526832   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.527230   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:34.527262   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.527445   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:34.527493   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.527937   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:34.527972   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:34.528172   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:34.608624   13237 ssh_runner.go:195] Run: systemctl --version
	I1003 17:43:34.642842   13237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 17:43:34.947520   13237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 17:43:34.954806   13237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 17:43:34.954880   13237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 17:43:34.975906   13237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 17:43:34.975935   13237 start.go:495] detecting cgroup driver to use...
	I1003 17:43:34.975998   13237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 17:43:34.994668   13237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 17:43:35.011368   13237 docker.go:218] disabling cri-docker service (if available) ...
	I1003 17:43:35.011443   13237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 17:43:35.029855   13237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 17:43:35.046717   13237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 17:43:35.190820   13237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 17:43:35.389827   13237 docker.go:234] disabling docker service ...
	I1003 17:43:35.389913   13237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 17:43:35.406097   13237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 17:43:35.421241   13237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 17:43:35.576269   13237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 17:43:35.716921   13237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 17:43:35.733423   13237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 17:43:35.757133   13237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 17:43:35.757210   13237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:35.769789   13237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 17:43:35.769871   13237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:35.782683   13237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:35.795307   13237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:35.807897   13237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 17:43:35.821297   13237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:35.834575   13237 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 17:43:35.856421   13237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
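
Each of the sed one-liners above is a line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). The same replace-a-key pattern in Go, as an illustrative sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces any existing `key = ...` line with the quoted value,
    // mimicking the `sed -i 's|^.*key = .*$|...|'` calls in the log.
    func setTOMLKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(string(conf))
    }
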
	I1003 17:43:35.869407   13237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 17:43:35.880394   13237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 17:43:35.880470   13237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 17:43:35.900794   13237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 17:43:35.913507   13237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:43:36.058775   13237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 17:43:36.176830   13237 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 17:43:36.176931   13237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 17:43:36.182066   13237 start.go:563] Will wait 60s for crictl version
	I1003 17:43:36.182140   13237 ssh_runner.go:195] Run: which crictl
	I1003 17:43:36.186178   13237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 17:43:36.233545   13237 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1003 17:43:36.233700   13237 ssh_runner.go:195] Run: crio --version
	I1003 17:43:36.267007   13237 ssh_runner.go:195] Run: crio --version
	I1003 17:43:36.301481   13237 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1003 17:43:36.305579   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:36.305970   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:36.305995   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:36.306200   13237 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1003 17:43:36.311020   13237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
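
This bash one-liner is a replace-or-append on /etc/hosts: strip any existing host.minikube.internal line, add the fresh mapping, and copy the temp file back. Sketched in Go (hypothetical helper name):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any line already mapping name and appends ip<TAB>name,
    // the same replace-or-append the bash one-liner performs on /etc/hosts.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHostsEntry("127.0.0.1 localhost\n", "192.168.39.1", "host.minikube.internal"))
    }
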
	I1003 17:43:36.326099   13237 kubeadm.go:883] updating cluster {Name:addons-925003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-925003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 17:43:36.326227   13237 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:43:36.326286   13237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 17:43:36.367440   13237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1003 17:43:36.367516   13237 ssh_runner.go:195] Run: which lz4
	I1003 17:43:36.372235   13237 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 17:43:36.377305   13237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 17:43:36.377342   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1003 17:43:37.862248   13237 crio.go:462] duration metric: took 1.490039719s to copy over tarball
	I1003 17:43:37.862317   13237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 17:43:39.539262   13237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.676911853s)
	I1003 17:43:39.539293   13237 crio.go:469] duration metric: took 1.677019009s to extract the tarball
	I1003 17:43:39.539317   13237 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 17:43:39.580756   13237 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 17:43:39.630046   13237 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 17:43:39.630074   13237 cache_images.go:85] Images are preloaded, skipping loading
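
Preload detection hinges on the two `sudo crictl images --output json` runs: before extraction the pinned kube-apiserver tag is missing, afterwards it is present and image loading is skipped. A stdlib-only sketch of that check; the JSON shape shown is my understanding of crictl's output and should be treated as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList models the relevant slice of `crictl images --output json`
    // (assumed shape: {"images":[{"repoTags":[...]}, ...]}).
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted tag.
    func hasImage(raw []byte, tag string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"]}]}`)
        ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.34.1")
        if err != nil {
            panic(err)
        }
        fmt.Println("preloaded:", ok) // false would mean: copy and extract the tarball
    }
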
	I1003 17:43:39.630082   13237 kubeadm.go:934] updating node { 192.168.39.143 8443 v1.34.1 crio true true} ...
	I1003 17:43:39.630168   13237 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-925003 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-925003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
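
The kubelet drop-in above is rendered from per-profile values (binary path, hostname-override, node-ip). One plausible way to render such a unit with text/template (illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Values taken from the log above.
        if err := t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
            "NodeName":    "addons-925003",
            "NodeIP":      "192.168.39.143",
        }); err != nil {
            panic(err)
        }
    }
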
	I1003 17:43:39.630257   13237 ssh_runner.go:195] Run: crio config
	I1003 17:43:39.679073   13237 cni.go:84] Creating CNI manager for ""
	I1003 17:43:39.679097   13237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 17:43:39.679597   13237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 17:43:39.679629   13237 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-925003 NodeName:addons-925003 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 17:43:39.679741   13237 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-925003"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.143"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
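
The generated kubeadm.yaml is a multi-document stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file. A quick stdlib-only sanity check is to split on the document separators and list each kind:

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds lists the `kind:` of every document in a multi-doc YAML stream.
    func kinds(yaml string) []string {
        var out []string
        for _, doc := range strings.Split(yaml, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    out = append(out, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return out
    }

    func main() {
        sample := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
        fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration]
    }
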
	
	I1003 17:43:39.679822   13237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 17:43:39.692637   13237 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 17:43:39.692724   13237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 17:43:39.704515   13237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1003 17:43:39.726007   13237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 17:43:39.747646   13237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1003 17:43:39.769445   13237 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I1003 17:43:39.773842   13237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 17:43:39.788822   13237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:43:39.934027   13237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 17:43:39.954879   13237 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003 for IP: 192.168.39.143
	I1003 17:43:39.954904   13237 certs.go:195] generating shared ca certs ...
	I1003 17:43:39.954922   13237 certs.go:227] acquiring lock for ca certs: {Name:mk4284b70d600b181ba346e84ac85f956eee3efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:39.955083   13237 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8656/.minikube/ca.key
	I1003 17:43:40.061586   13237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt ...
	I1003 17:43:40.061616   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt: {Name:mk7847477dc41a2310846435f13298a23ad61824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.061774   13237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8656/.minikube/ca.key ...
	I1003 17:43:40.061797   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/ca.key: {Name:mkf3c1d54652c3eebf4601350c489ded8db3eef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.061868   13237 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.key
	I1003 17:43:40.368129   13237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.crt ...
	I1003 17:43:40.368163   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.crt: {Name:mke6a8f8c2c9e825482d1fb610ec759e4c002a2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.368332   13237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.key ...
	I1003 17:43:40.368343   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.key: {Name:mk9fc1f52edf7014fa1f80be0b5b799639ec770a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.368417   13237 certs.go:257] generating profile certs ...
	I1003 17:43:40.368475   13237 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.key
	I1003 17:43:40.368490   13237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt with IP's: []
	I1003 17:43:40.691209   13237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt ...
	I1003 17:43:40.691240   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: {Name:mk336737fdf0b0101543eced3a646bfbbad63227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.691405   13237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.key ...
	I1003 17:43:40.691416   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.key: {Name:mk50afe844b8608ddf77e2823bf90c522b120eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.691489   13237 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.key.1662c976
	I1003 17:43:40.691507   13237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.crt.1662c976 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.143]
	I1003 17:43:40.916557   13237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.crt.1662c976 ...
	I1003 17:43:40.916588   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.crt.1662c976: {Name:mk30765c80b8d75da5f8c13875eab78910da2247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.916757   13237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.key.1662c976 ...
	I1003 17:43:40.916770   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.key.1662c976: {Name:mkcf2f50baee8cec240f4f42ed9ac413a3773db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.916846   13237 certs.go:382] copying /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.crt.1662c976 -> /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.crt
	I1003 17:43:40.916920   13237 certs.go:386] copying /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.key.1662c976 -> /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.key
	I1003 17:43:40.916967   13237 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.key
	I1003 17:43:40.916984   13237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.crt with IP's: []
	I1003 17:43:40.930143   13237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.crt ...
	I1003 17:43:40.930171   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.crt: {Name:mk5486457f5a344d797e1dee21009e08cd0b42cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.930333   13237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.key ...
	I1003 17:43:40.930343   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.key: {Name:mk36a1768cab54a5d0748865ab0183a615ce6834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:40.930496   13237 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 17:43:40.930526   13237 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem (1078 bytes)
	I1003 17:43:40.930547   13237 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem (1123 bytes)
	I1003 17:43:40.930568   13237 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/key.pem (1679 bytes)
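
The minikubeCA and proxyClientCA material above is ordinary self-signed CA generation. With the standard library that looks roughly like the following (a minimal sketch; the key size and validity here are assumptions, and minikube's crypto.go will differ in detail):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: template and parent are the same certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
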
	I1003 17:43:40.931163   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 17:43:40.963013   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 17:43:40.993772   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 17:43:41.023906   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 17:43:41.055157   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 17:43:41.085109   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 17:43:41.114362   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 17:43:41.145012   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 17:43:41.177244   13237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 17:43:41.208397   13237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 17:43:41.230980   13237 ssh_runner.go:195] Run: openssl version
	I1003 17:43:41.237502   13237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 17:43:41.251991   13237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:43:41.257541   13237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:43:41.257605   13237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 17:43:41.265439   13237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
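
The b5213941.0 file name is the OpenSSL subject hash printed by the `openssl x509 -hash -noout` run above, and the `test -L || ln -fs` guard keeps the symlink idempotent. The same guard in Go is an Lstat before Symlink (paths in main are placeholders):

    package main

    import (
        "fmt"
        "os"
    )

    // ensureSymlink creates link -> target unless a symlink already exists,
    // mirroring `test -L link || ln -fs target link`.
    func ensureSymlink(target, link string) error {
        if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
            return nil // already a symlink, leave it alone
        }
        os.Remove(link) // -f: clear any non-symlink file in the way
        return os.Symlink(target, link)
    }

    func main() {
        if err := ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/tmp/b5213941.0"); err != nil {
            panic(err)
        }
        fmt.Println("link in place")
    }
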
	I1003 17:43:41.279865   13237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 17:43:41.285615   13237 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 17:43:41.285690   13237 kubeadm.go:400] StartCluster: {Name:addons-925003 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-925003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:43:41.285750   13237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 17:43:41.285832   13237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 17:43:41.328359   13237 cri.go:89] found id: ""
	I1003 17:43:41.328422   13237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 17:43:41.340955   13237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 17:43:41.353277   13237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 17:43:41.366091   13237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 17:43:41.366116   13237 kubeadm.go:157] found existing configuration files:
	
	I1003 17:43:41.366158   13237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 17:43:41.377644   13237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 17:43:41.377705   13237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 17:43:41.390801   13237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 17:43:41.402735   13237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 17:43:41.402820   13237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 17:43:41.414866   13237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 17:43:41.426410   13237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 17:43:41.426472   13237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 17:43:41.438794   13237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 17:43:41.450899   13237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 17:43:41.450964   13237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 17:43:41.463250   13237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 17:43:41.514231   13237 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 17:43:41.514285   13237 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 17:43:41.625912   13237 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 17:43:41.626018   13237 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 17:43:41.626094   13237 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 17:43:41.637293   13237 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 17:43:41.939230   13237 out.go:252]   - Generating certificates and keys ...
	I1003 17:43:41.939367   13237 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 17:43:41.939505   13237 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 17:43:41.939612   13237 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 17:43:41.939700   13237 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 17:43:41.998846   13237 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 17:43:42.584045   13237 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 17:43:42.644625   13237 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 17:43:42.644800   13237 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-925003 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	I1003 17:43:42.876762   13237 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 17:43:42.877136   13237 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-925003 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	I1003 17:43:43.388319   13237 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 17:43:43.933661   13237 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 17:43:44.344247   13237 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 17:43:44.345583   13237 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 17:43:44.961473   13237 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 17:43:45.370927   13237 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 17:43:46.167413   13237 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 17:43:46.464732   13237 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 17:43:47.031159   13237 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 17:43:47.031365   13237 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 17:43:47.033767   13237 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 17:43:47.035398   13237 out.go:252]   - Booting up control plane ...
	I1003 17:43:47.035540   13237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 17:43:47.035643   13237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 17:43:47.037970   13237 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 17:43:47.055249   13237 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 17:43:47.055407   13237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 17:43:47.064546   13237 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 17:43:47.064939   13237 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 17:43:47.065042   13237 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 17:43:47.237188   13237 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 17:43:47.237345   13237 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 17:43:48.237411   13237 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001405s
	I1003 17:43:48.240681   13237 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 17:43:48.240763   13237 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.143:8443/livez
	I1003 17:43:48.240892   13237 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 17:43:48.240999   13237 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 17:43:51.066076   13237 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.826733079s
	I1003 17:43:51.786022   13237 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.5478197s
	I1003 17:43:53.740920   13237 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.503706703s
	I1003 17:43:53.760803   13237 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 17:43:53.780930   13237 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 17:43:53.807514   13237 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 17:43:53.807727   13237 kubeadm.go:318] [mark-control-plane] Marking the node addons-925003 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 17:43:53.824702   13237 kubeadm.go:318] [bootstrap-token] Using token: juj3bz.ztbo5l6ugtrbu6cd
	I1003 17:43:53.826811   13237 out.go:252]   - Configuring RBAC rules ...
	I1003 17:43:53.826970   13237 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 17:43:53.836591   13237 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 17:43:53.853381   13237 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 17:43:53.857390   13237 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 17:43:53.861901   13237 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 17:43:53.867153   13237 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 17:43:54.151498   13237 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 17:43:54.597610   13237 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 17:43:55.156015   13237 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 17:43:55.156043   13237 kubeadm.go:318] 
	I1003 17:43:55.156148   13237 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 17:43:55.156171   13237 kubeadm.go:318] 
	I1003 17:43:55.156284   13237 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 17:43:55.156296   13237 kubeadm.go:318] 
	I1003 17:43:55.156326   13237 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 17:43:55.156417   13237 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 17:43:55.156501   13237 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 17:43:55.156510   13237 kubeadm.go:318] 
	I1003 17:43:55.156588   13237 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 17:43:55.156597   13237 kubeadm.go:318] 
	I1003 17:43:55.156646   13237 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 17:43:55.156661   13237 kubeadm.go:318] 
	I1003 17:43:55.156752   13237 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 17:43:55.156872   13237 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 17:43:55.157005   13237 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 17:43:55.157013   13237 kubeadm.go:318] 
	I1003 17:43:55.157119   13237 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 17:43:55.157239   13237 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 17:43:55.157259   13237 kubeadm.go:318] 
	I1003 17:43:55.157365   13237 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token juj3bz.ztbo5l6ugtrbu6cd \
	I1003 17:43:55.157457   13237 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4bfe0d80a9ca5e8d78d8d84b967ba346e68b95ec3104468a9b7a9a35745deeeb \
	I1003 17:43:55.157476   13237 kubeadm.go:318] 	--control-plane 
	I1003 17:43:55.157482   13237 kubeadm.go:318] 
	I1003 17:43:55.157547   13237 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 17:43:55.157553   13237 kubeadm.go:318] 
	I1003 17:43:55.157624   13237 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token juj3bz.ztbo5l6ugtrbu6cd \
	I1003 17:43:55.157775   13237 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:4bfe0d80a9ca5e8d78d8d84b967ba346e68b95ec3104468a9b7a9a35745deeeb 
	I1003 17:43:55.159671   13237 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 17:43:55.159711   13237 cni.go:84] Creating CNI manager for ""
	I1003 17:43:55.159726   13237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 17:43:55.161639   13237 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 17:43:55.163137   13237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 17:43:55.176861   13237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
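
The 496-byte 1-k8s.conflist configures a bridge-plus-portmap CNI chain for the 10.244.0.0/16 pod CIDR chosen earlier. Below is a representative conflist plus a structural check; the JSON values are assumptions, not necessarily byte-for-byte what minikube writes:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // A representative bridge CNI conflist; field values are assumptions
    // based on the pod CIDR logged above, not minikube's exact file.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "k8s",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        var cfg struct {
            Name    string `json:"name"`
            Plugins []struct {
                Type string `json:"type"`
            } `json:"plugins"`
        }
        if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
            panic(err)
        }
        for _, p := range cfg.Plugins {
            fmt.Println(cfg.Name, "plugin:", p.Type)
        }
    }
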
	I1003 17:43:55.202061   13237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 17:43:55.202133   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:55.202199   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-925003 minikube.k8s.io/updated_at=2025_10_03T17_43_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=addons-925003 minikube.k8s.io/primary=true
	I1003 17:43:55.254022   13237 ops.go:34] apiserver oom_adj: -16
	I1003 17:43:55.360076   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:55.860325   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:56.360838   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:56.860536   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:57.360866   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:57.861031   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:58.360494   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:58.860646   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:59.360272   13237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 17:43:59.453440   13237 kubeadm.go:1113] duration metric: took 4.251366271s to wait for elevateKubeSystemPrivileges
	I1003 17:43:59.453489   13237 kubeadm.go:402] duration metric: took 18.167802576s to StartCluster
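The burst of `kubectl get sa default` calls above is a poll: kubeadm has just finished, and minikube appears to wait for the default service account to exist before the cluster-admin binding created by minikube-rbac can take effect, checking roughly every 500ms for about 4.25s. A minimal sketch of that loop (the helper name and signature are hypothetical; the command string is taken from the log):

    package sketch

    import (
        "context"
        "time"
    )

    // pollDefaultSA retries the logged kubectl command about every 500ms
    // until it succeeds or the context expires.
    func pollDefaultSA(ctx context.Context, run func(cmd string) error) error {
        const cmd = "sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            if err := run(cmd); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }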
	I1003 17:43:59.453512   13237 settings.go:142] acquiring lock: {Name:mke9d2b3efcaa2fe43ef0f2a287704ef18b85ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:59.453683   13237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 17:43:59.454195   13237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/kubeconfig: {Name:mk3bf5476cb0b0966e4582f99de822e34e150667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 17:43:59.454441   13237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 17:43:59.454480   13237 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 17:43:59.454534   13237 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
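Everything in the toEnable map that is set to true is then enabled in parallel, which is why the "Setting addon" and "Checking if ... exists" lines below interleave and their timestamps are not monotonic per addon. A sketch of that fan-out (names and signature are hypothetical):

    package sketch

    import "sync"

    // enableAll starts one worker per enabled addon and waits for all of
    // them, matching the interleaved per-addon log lines below.
    func enableAll(toEnable map[string]bool, enable func(name string) error) {
        var wg sync.WaitGroup
        for name, on := range toEnable {
            if !on {
                continue
            }
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                _ = enable(name) // failures surface as "! Enabling '<name>' ..." warnings
            }(name)
        }
        wg.Wait()
    }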
	I1003 17:43:59.454666   13237 addons.go:69] Setting yakd=true in profile "addons-925003"
	I1003 17:43:59.454675   13237 addons.go:69] Setting inspektor-gadget=true in profile "addons-925003"
	I1003 17:43:59.454689   13237 addons.go:238] Setting addon yakd=true in "addons-925003"
	I1003 17:43:59.454689   13237 config.go:182] Loaded profile config "addons-925003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:43:59.454701   13237 addons.go:238] Setting addon inspektor-gadget=true in "addons-925003"
	I1003 17:43:59.454721   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.454718   13237 addons.go:69] Setting default-storageclass=true in profile "addons-925003"
	I1003 17:43:59.454733   13237 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-925003"
	I1003 17:43:59.454743   13237 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-925003"
	I1003 17:43:59.454747   13237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-925003"
	I1003 17:43:59.454743   13237 addons.go:69] Setting registry-creds=true in profile "addons-925003"
	I1003 17:43:59.454762   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.454771   13237 addons.go:69] Setting metrics-server=true in profile "addons-925003"
	I1003 17:43:59.454775   13237 addons.go:238] Setting addon registry-creds=true in "addons-925003"
	I1003 17:43:59.454800   13237 addons.go:238] Setting addon metrics-server=true in "addons-925003"
	I1003 17:43:59.454817   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.454852   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.455391   13237 addons.go:69] Setting ingress=true in profile "addons-925003"
	I1003 17:43:59.455433   13237 addons.go:238] Setting addon ingress=true in "addons-925003"
	I1003 17:43:59.455497   13237 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-925003"
	I1003 17:43:59.455499   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.455512   13237 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-925003"
	I1003 17:43:59.455531   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.455868   13237 addons.go:69] Setting storage-provisioner=true in profile "addons-925003"
	I1003 17:43:59.455894   13237 addons.go:69] Setting cloud-spanner=true in profile "addons-925003"
	I1003 17:43:59.455904   13237 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-925003"
	I1003 17:43:59.455916   13237 addons.go:238] Setting addon cloud-spanner=true in "addons-925003"
	I1003 17:43:59.455919   13237 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-925003"
	I1003 17:43:59.455940   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.455989   13237 addons.go:69] Setting ingress-dns=true in profile "addons-925003"
	I1003 17:43:59.456002   13237 addons.go:238] Setting addon ingress-dns=true in "addons-925003"
	I1003 17:43:59.456027   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.454762   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.456437   13237 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-925003"
	I1003 17:43:59.456491   13237 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-925003"
	I1003 17:43:59.456526   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.456828   13237 addons.go:69] Setting volcano=true in profile "addons-925003"
	I1003 17:43:59.456849   13237 addons.go:238] Setting addon volcano=true in "addons-925003"
	I1003 17:43:59.456869   13237 addons.go:69] Setting gcp-auth=true in profile "addons-925003"
	I1003 17:43:59.456880   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.456887   13237 mustload.go:65] Loading cluster: addons-925003
	I1003 17:43:59.456931   13237 addons.go:69] Setting registry=true in profile "addons-925003"
	I1003 17:43:59.456949   13237 addons.go:238] Setting addon registry=true in "addons-925003"
	I1003 17:43:59.456973   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.457071   13237 config.go:182] Loaded profile config "addons-925003": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:43:59.457439   13237 out.go:179] * Verifying Kubernetes components...
	I1003 17:43:59.457686   13237 addons.go:69] Setting volumesnapshots=true in profile "addons-925003"
	I1003 17:43:59.457712   13237 addons.go:238] Setting addon volumesnapshots=true in "addons-925003"
	I1003 17:43:59.457749   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.455896   13237 addons.go:238] Setting addon storage-provisioner=true in "addons-925003"
	I1003 17:43:59.457793   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.459089   13237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 17:43:59.463300   13237 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1003 17:43:59.463339   13237 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1003 17:43:59.463309   13237 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1003 17:43:59.464514   13237 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1003 17:43:59.464521   13237 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1003 17:43:59.465131   13237 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1003 17:43:59.464657   13237 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1003 17:43:59.465233   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1003 17:43:59.464894   13237 addons.go:238] Setting addon default-storageclass=true in "addons-925003"
	I1003 17:43:59.465284   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.464905   13237 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-925003"
	I1003 17:43:59.465398   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.465532   13237 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1003 17:43:59.466029   13237 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1003 17:43:59.465542   13237 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1003 17:43:59.465545   13237 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	W1003 17:43:59.466145   13237 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1003 17:43:59.466306   13237 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1003 17:43:59.466325   13237 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1003 17:43:59.466526   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:43:59.467146   13237 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1003 17:43:59.467893   13237 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 17:43:59.467911   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1003 17:43:59.467941   13237 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1003 17:43:59.467912   13237 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1003 17:43:59.467941   13237 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1003 17:43:59.468068   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1003 17:43:59.468134   13237 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1003 17:43:59.468970   13237 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 17:43:59.469388   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1003 17:43:59.469431   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1003 17:43:59.469708   13237 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 17:43:59.469764   13237 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:43:59.470119   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 17:43:59.470374   13237 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 17:43:59.470390   13237 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 17:43:59.470435   13237 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1003 17:43:59.470447   13237 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1003 17:43:59.470841   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1003 17:43:59.470450   13237 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1003 17:43:59.470477   13237 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1003 17:43:59.470987   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1003 17:43:59.471341   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1003 17:43:59.472161   13237 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 17:43:59.472208   13237 out.go:179]   - Using image docker.io/registry:3.0.0
	I1003 17:43:59.472211   13237 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1003 17:43:59.473810   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1003 17:43:59.473963   13237 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1003 17:43:59.473982   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1003 17:43:59.474135   13237 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 17:43:59.474150   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1003 17:43:59.474187   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.475547   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.476104   13237 out.go:179]   - Using image docker.io/busybox:stable
	I1003 17:43:59.476321   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.476808   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1003 17:43:59.477128   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.477440   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.477473   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.477593   13237 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:43:59.477602   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1003 17:43:59.477982   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.478002   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.478628   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.479166   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.479725   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1003 17:43:59.480318   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.480352   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.481054   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.481095   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.481119   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.482168   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.482274   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.482657   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.482675   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1003 17:43:59.483559   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.483597   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.483650   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.483740   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.483775   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.484138   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.484294   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.484396   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.484650   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.485365   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.485366   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.485709   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1003 17:43:59.485833   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.485858   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.485924   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.485975   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.486022   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.486066   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.486298   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.486560   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.486590   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.486719   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.486719   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.486846   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.486767   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.486964   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.487217   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.487259   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.487664   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.487940   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.487984   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.488111   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.488139   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.488150   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.488393   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.488680   13237 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1003 17:43:59.488711   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.489090   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.489124   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.489260   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:43:59.489877   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1003 17:43:59.489894   13237 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1003 17:43:59.492310   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.492677   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:43:59.492705   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:43:59.492873   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	W1003 17:43:59.662021   13237 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52934->192.168.39.143:22: read: connection reset by peer
	I1003 17:43:59.662050   13237 retry.go:31] will retry after 344.708116ms: ssh: handshake failed: read tcp 192.168.39.1:52934->192.168.39.143:22: read: connection reset by peer
	W1003 17:43:59.695733   13237 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52976->192.168.39.143:22: read: connection reset by peer
	I1003 17:43:59.695757   13237 retry.go:31] will retry after 190.417722ms: ssh: handshake failed: read tcp 192.168.39.1:52976->192.168.39.143:22: read: connection reset by peer
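Both handshake failures above are connection resets from sshd on the freshly booted guest, and both are retried after a short randomized delay (344ms and 190ms here). The shape of that retry, sketched with assumed bounds rather than minikube's actual policy:

    package sketch

    import (
        "math/rand"
        "time"
    )

    // retryAfterJitter redials after a randomized pause on transient
    // failures, as in the "will retry after NNNms" lines above. The attempt
    // cap and delay range are assumptions.
    func retryAfterJitter(attempts int, dial func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            time.Sleep(time.Duration(100+rand.Intn(300)) * time.Millisecond)
        }
        return err
    }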
	I1003 17:44:00.072429   13237 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1003 17:44:00.072461   13237 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1003 17:44:00.136506   13237 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1003 17:44:00.136534   13237 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1003 17:44:00.159208   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1003 17:44:00.161766   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 17:44:00.296072   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1003 17:44:00.304650   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 17:44:00.317324   13237 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 17:44:00.317429   13237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
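The pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a `hosts` block resolving host.minikube.internal to the host-side gateway (192.168.39.1) ahead of the `forward` directive, adds `log` above `errors`, and feeds the result back through `kubectl replace`. Illustratively, the Corefile afterwards has this shape (only the hosts and log lines come from the sed expressions; the surrounding directives are typical CoreDNS defaults, not a dump from this run):

    package sketch

    // corefileAfter illustrates the spliced-in hosts block.
    const corefileAfter = `.:53 {
        log
        errors
        health
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }`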
	I1003 17:44:00.317837   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 17:44:00.320262   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1003 17:44:00.327293   13237 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:00.327313   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1003 17:44:00.328828   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 17:44:00.373659   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 17:44:00.381831   13237 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1003 17:44:00.381859   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1003 17:44:00.435558   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1003 17:44:00.492025   13237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1003 17:44:00.492050   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1003 17:44:00.494351   13237 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1003 17:44:00.494373   13237 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1003 17:44:00.534232   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1003 17:44:00.534257   13237 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1003 17:44:00.755940   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1003 17:44:00.781893   13237 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1003 17:44:00.781922   13237 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1003 17:44:00.796635   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:00.816353   13237 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1003 17:44:00.816380   13237 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1003 17:44:00.936582   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1003 17:44:00.936636   13237 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1003 17:44:00.964544   13237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1003 17:44:00.964581   13237 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1003 17:44:01.040632   13237 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1003 17:44:01.040656   13237 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1003 17:44:01.188500   13237 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1003 17:44:01.188527   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1003 17:44:01.201859   13237 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:44:01.201891   13237 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1003 17:44:01.311369   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1003 17:44:01.311392   13237 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1003 17:44:01.537346   13237 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1003 17:44:01.537373   13237 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1003 17:44:01.582213   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 17:44:01.598185   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1003 17:44:01.759814   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1003 17:44:01.759838   13237 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1003 17:44:01.852402   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1003 17:44:01.852427   13237 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1003 17:44:02.325818   13237 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1003 17:44:02.325840   13237 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1003 17:44:02.493914   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.334663659s)
	I1003 17:44:02.493926   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.332119647s)
	I1003 17:44:02.551498   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.255393562s)
	I1003 17:44:02.662337   13237 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:44:02.662357   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1003 17:44:03.053089   13237 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1003 17:44:03.053117   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1003 17:44:03.477473   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:44:03.604762   13237 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1003 17:44:03.604837   13237 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1003 17:44:03.950739   13237 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1003 17:44:03.950761   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1003 17:44:04.307822   13237 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1003 17:44:04.307862   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1003 17:44:04.684066   13237 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:44:04.684092   13237 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1003 17:44:05.010098   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 17:44:06.760227   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.455538891s)
	I1003 17:44:06.760291   13237 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.442835374s)
	I1003 17:44:06.760321   13237 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1003 17:44:06.760351   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.440062181s)
	I1003 17:44:06.760227   13237 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.442854037s)
	I1003 17:44:06.760459   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.442598317s)
	I1003 17:44:06.761265   13237 node_ready.go:35] waiting up to 6m0s for node "addons-925003" to be "Ready" ...
	I1003 17:44:06.815476   13237 node_ready.go:49] node "addons-925003" is "Ready"
	I1003 17:44:06.815508   13237 node_ready.go:38] duration metric: took 54.209833ms for node "addons-925003" to be "Ready" ...
	I1003 17:44:06.815523   13237 api_server.go:52] waiting for apiserver process to appear ...
	I1003 17:44:06.815589   13237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 17:44:06.962536   13237 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1003 17:44:06.965657   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:44:06.966078   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:44:06.966107   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:44:06.966254   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:44:07.295308   13237 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-925003" context rescaled to 1 replicas
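kubeadm's default coredns Deployment runs two replicas; on a single-node cluster minikube scales it down to one, which is what the kapi.go:214 line above records. A sketch of an equivalent kubectl form via os/exec (minikube does this programmatically rather than by shelling out):

    package sketch

    import "os/exec"

    // rescaleCoreDNS shows the equivalent of the rescale logged above.
    func rescaleCoreDNS() error {
        return exec.Command("kubectl", "--context", "addons-925003",
            "-n", "kube-system", "scale", "deployment", "coredns",
            "--replicas=1").Run()
    }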
	I1003 17:44:07.367862   13237 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1003 17:44:07.631045   13237 addons.go:238] Setting addon gcp-auth=true in "addons-925003"
	I1003 17:44:07.631102   13237 host.go:66] Checking if "addons-925003" exists ...
	I1003 17:44:07.633100   13237 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1003 17:44:07.635669   13237 main.go:141] libmachine: domain addons-925003 has defined MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:44:07.636192   13237 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:f9:b5", ip: ""} in network mk-addons-925003: {Iface:virbr1 ExpiryTime:2025-10-03 18:43:29 +0000 UTC Type:0 Mac:52:54:00:df:f9:b5 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:addons-925003 Clientid:01:52:54:00:df:f9:b5}
	I1003 17:44:07.636223   13237 main.go:141] libmachine: domain addons-925003 has defined IP address 192.168.39.143 and MAC address 52:54:00:df:f9:b5 in network mk-addons-925003
	I1003 17:44:07.636405   13237 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/addons-925003/id_rsa Username:docker}
	I1003 17:44:08.578836   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.24997522s)
	I1003 17:44:08.578874   13237 addons.go:479] Verifying addon ingress=true in "addons-925003"
	I1003 17:44:08.578948   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.205252635s)
	I1003 17:44:08.578992   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.143394729s)
	I1003 17:44:08.579031   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.823056455s)
	I1003 17:44:08.579057   13237 addons.go:479] Verifying addon registry=true in "addons-925003"
	I1003 17:44:08.579099   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.782438002s)
	W1003 17:44:08.579163   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:08.579203   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.996961414s)
	I1003 17:44:08.579208   13237 retry.go:31] will retry after 202.05641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
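This failure is self-inflicted: the scp at 17:43:59.466325 copied ig-crd.yaml over at just 14 bytes, so the file cannot contain a complete manifest and kubectl's validation finds neither of the two fields every Kubernetes object must declare. For reference, the minimal top-level header the validator is asking for (metadata.name below is an assumed placeholder, not the real inspektor-gadget CRD contents):

    package sketch

    // minimalManifestHeader shows the apiVersion/kind pair whose absence
    // produces the validation error above.
    const minimalManifestHeader = `apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: example.gadget.kinvolk.io
    `

The retry a few lines below reruns the apply with --force, which recreates the objects that did apply but does not by itself relax validation of the truncated file.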
	I1003 17:44:08.579223   13237 addons.go:479] Verifying addon metrics-server=true in "addons-925003"
	I1003 17:44:08.579275   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.981054868s)
	I1003 17:44:08.580481   13237 out.go:179] * Verifying ingress addon...
	I1003 17:44:08.581522   13237 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-925003 service yakd-dashboard -n yakd-dashboard
	
	I1003 17:44:08.581531   13237 out.go:179] * Verifying registry addon...
	I1003 17:44:08.583180   13237 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1003 17:44:08.583888   13237 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1003 17:44:08.667536   13237 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1003 17:44:08.667565   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:08.670148   13237 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 17:44:08.670169   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:08.782471   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:08.920790   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.44325497s)
	W1003 17:44:08.920849   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1003 17:44:08.920888   13237 retry.go:31] will retry after 264.0907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
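This second retry is a different, ordinary race: the VolumeSnapshot CRDs and a VolumeSnapshotClass are applied in one batch, and the class is rejected because the just-created CRDs are not yet established in API discovery, hence "ensure CRDs are installed first". minikube simply retries the batch ~264ms later; another way to sequence it, sketched here as a hypothetical helper, is to wait for the CRD's Established condition before applying dependent objects:

    package sketch

    import "os/exec"

    // waitEstablished blocks until the API server marks the CRD Established,
    // closing the "no matches for kind" window seen above.
    func waitEstablished(crd string) error {
        return exec.Command("kubectl", "wait", "--for=condition=Established",
            "crd/"+crd, "--timeout=60s").Run()
    }

For example, waitEstablished("volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml.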
	I1003 17:44:09.128607   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:09.132140   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:09.185864   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 17:44:09.625852   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:09.626141   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:09.862515   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.85236014s)
	I1003 17:44:09.862572   13237 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-925003"
	I1003 17:44:09.862589   13237 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.229460689s)
	I1003 17:44:09.862529   13237 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.046918491s)
	I1003 17:44:09.862723   13237 api_server.go:72] duration metric: took 10.408204081s to wait for apiserver process to appear ...
	I1003 17:44:09.862734   13237 api_server.go:88] waiting for apiserver healthz status ...
	I1003 17:44:09.862754   13237 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1003 17:44:09.864719   13237 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 17:44:09.864722   13237 out.go:179] * Verifying csi-hostpath-driver addon...
	I1003 17:44:09.866426   13237 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1003 17:44:09.867243   13237 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1003 17:44:09.867685   13237 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1003 17:44:09.867704   13237 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1003 17:44:09.892017   13237 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
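The healthz gate above is just an authenticated GET against the API server, and the 200 comes back with the literal body "ok". Any client built from the same kubeconfig can reproduce it; a hedged sketch using client-go's raw REST path (not minikube's api_server.go):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// GET /healthz on https://192.168.39.143:8443; a healthy apiserver answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}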
	I1003 17:44:09.900005   13237 api_server.go:141] control plane version: v1.34.1
	I1003 17:44:09.900048   13237 api_server.go:131] duration metric: took 37.305517ms to wait for apiserver health ...
	I1003 17:44:09.900061   13237 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 17:44:09.900359   13237 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 17:44:09.900383   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:09.943482   13237 system_pods.go:59] 20 kube-system pods found
	I1003 17:44:09.943542   13237 system_pods.go:61] "amd-gpu-device-plugin-46299" [d98a37ad-4d58-4c10-9a64-b7971edb16a9] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1003 17:44:09.943554   13237 system_pods.go:61] "coredns-66bc5c9577-m9pxm" [98bb7719-6400-4f0b-b9f9-ab570693ebd6] Running
	I1003 17:44:09.943568   13237 system_pods.go:61] "coredns-66bc5c9577-zw5qb" [efa446e2-bba5-4ee3-af25-89e83d6ba26f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 17:44:09.943577   13237 system_pods.go:61] "csi-hostpath-attacher-0" [94e2632c-4a53-4046-96aa-17ea34b86fa8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 17:44:09.943588   13237 system_pods.go:61] "csi-hostpath-resizer-0" [7137bf87-f67e-436a-8a88-73d9e422ffa8] Pending
	I1003 17:44:09.943599   13237 system_pods.go:61] "csi-hostpathplugin-vdqcj" [b7b7fd1c-8e4f-46ae-bc67-dc23546b0faa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 17:44:09.943618   13237 system_pods.go:61] "etcd-addons-925003" [400967e1-837b-499c-86ac-b3a7c3752a57] Running
	I1003 17:44:09.943628   13237 system_pods.go:61] "kube-apiserver-addons-925003" [5744e7a7-c8d7-4413-8fe3-6b86e8e79ad5] Running
	I1003 17:44:09.943634   13237 system_pods.go:61] "kube-controller-manager-addons-925003" [dbd752f2-7043-486f-81a5-1363d4f5fd90] Running
	I1003 17:44:09.943642   13237 system_pods.go:61] "kube-ingress-dns-minikube" [006dc168-9a22-41fe-8b73-6eb53c20331c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 17:44:09.943650   13237 system_pods.go:61] "kube-proxy-qhl2n" [546d4057-9f5e-4c44-8bb8-c6d2c1eae5ec] Running
	I1003 17:44:09.943656   13237 system_pods.go:61] "kube-scheduler-addons-925003" [cfd549e9-ef73-42b2-93a9-39e6ae894971] Running
	I1003 17:44:09.943666   13237 system_pods.go:61] "metrics-server-85b7d694d7-jkgpr" [23388409-c8d5-41f0-a700-8d78be777b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 17:44:09.943674   13237 system_pods.go:61] "nvidia-device-plugin-daemonset-b2hkn" [9a68947b-5baf-4caf-8409-6d59793d7c62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 17:44:09.943687   13237 system_pods.go:61] "registry-66898fdd98-w9j9n" [0381ad84-6981-475d-94b6-d8a0c3d4fe30] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 17:44:09.943700   13237 system_pods.go:61] "registry-creds-764b6fb674-k8dfp" [3d7185d2-74cc-4408-be0e-9e920b884cc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 17:44:09.943717   13237 system_pods.go:61] "registry-proxy-s4dqm" [25650394-d79a-461d-834c-67479b4075e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 17:44:09.943734   13237 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zblnx" [0f302262-8a05-42cf-87e6-85cf25851d95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 17:44:09.943746   13237 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zsks9" [d149c316-5f92-4d29-a9d8-5588d7f6dabb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 17:44:09.943757   13237 system_pods.go:61] "storage-provisioner" [1a3111e9-eaf0-46ac-b2a3-8fe6e09fe8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 17:44:09.943775   13237 system_pods.go:74] duration metric: took 43.702504ms to wait for pod list to return data ...
	I1003 17:44:09.943805   13237 default_sa.go:34] waiting for default service account to be created ...
	I1003 17:44:09.958033   13237 default_sa.go:45] found service account: "default"
	I1003 17:44:09.958061   13237 default_sa.go:55] duration metric: took 14.249972ms for default service account to be created ...
	I1003 17:44:09.958069   13237 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 17:44:09.982763   13237 system_pods.go:86] 20 kube-system pods found
	I1003 17:44:09.982811   13237 system_pods.go:89] "amd-gpu-device-plugin-46299" [d98a37ad-4d58-4c10-9a64-b7971edb16a9] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1003 17:44:09.982823   13237 system_pods.go:89] "coredns-66bc5c9577-m9pxm" [98bb7719-6400-4f0b-b9f9-ab570693ebd6] Running
	I1003 17:44:09.982834   13237 system_pods.go:89] "coredns-66bc5c9577-zw5qb" [efa446e2-bba5-4ee3-af25-89e83d6ba26f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 17:44:09.982845   13237 system_pods.go:89] "csi-hostpath-attacher-0" [94e2632c-4a53-4046-96aa-17ea34b86fa8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 17:44:09.982855   13237 system_pods.go:89] "csi-hostpath-resizer-0" [7137bf87-f67e-436a-8a88-73d9e422ffa8] Pending
	I1003 17:44:09.982865   13237 system_pods.go:89] "csi-hostpathplugin-vdqcj" [b7b7fd1c-8e4f-46ae-bc67-dc23546b0faa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 17:44:09.982874   13237 system_pods.go:89] "etcd-addons-925003" [400967e1-837b-499c-86ac-b3a7c3752a57] Running
	I1003 17:44:09.982878   13237 system_pods.go:89] "kube-apiserver-addons-925003" [5744e7a7-c8d7-4413-8fe3-6b86e8e79ad5] Running
	I1003 17:44:09.982882   13237 system_pods.go:89] "kube-controller-manager-addons-925003" [dbd752f2-7043-486f-81a5-1363d4f5fd90] Running
	I1003 17:44:09.982889   13237 system_pods.go:89] "kube-ingress-dns-minikube" [006dc168-9a22-41fe-8b73-6eb53c20331c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 17:44:09.982895   13237 system_pods.go:89] "kube-proxy-qhl2n" [546d4057-9f5e-4c44-8bb8-c6d2c1eae5ec] Running
	I1003 17:44:09.982900   13237 system_pods.go:89] "kube-scheduler-addons-925003" [cfd549e9-ef73-42b2-93a9-39e6ae894971] Running
	I1003 17:44:09.982907   13237 system_pods.go:89] "metrics-server-85b7d694d7-jkgpr" [23388409-c8d5-41f0-a700-8d78be777b5e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 17:44:09.982914   13237 system_pods.go:89] "nvidia-device-plugin-daemonset-b2hkn" [9a68947b-5baf-4caf-8409-6d59793d7c62] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 17:44:09.982925   13237 system_pods.go:89] "registry-66898fdd98-w9j9n" [0381ad84-6981-475d-94b6-d8a0c3d4fe30] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 17:44:09.982936   13237 system_pods.go:89] "registry-creds-764b6fb674-k8dfp" [3d7185d2-74cc-4408-be0e-9e920b884cc6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 17:44:09.982947   13237 system_pods.go:89] "registry-proxy-s4dqm" [25650394-d79a-461d-834c-67479b4075e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 17:44:09.982955   13237 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zblnx" [0f302262-8a05-42cf-87e6-85cf25851d95] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 17:44:09.982968   13237 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zsks9" [d149c316-5f92-4d29-a9d8-5588d7f6dabb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 17:44:09.982978   13237 system_pods.go:89] "storage-provisioner" [1a3111e9-eaf0-46ac-b2a3-8fe6e09fe8b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 17:44:09.982991   13237 system_pods.go:126] duration metric: took 24.916525ms to wait for k8s-apps to be running ...
	I1003 17:44:09.983001   13237 system_svc.go:44] waiting for kubelet service to be running ...
	I1003 17:44:09.983048   13237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
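The kubelet check relies entirely on systemctl's exit code: `is-active --quiet` prints nothing and exits 0 only when the unit is active, so the ssh_runner needs nothing but the status. A standalone equivalent (an assumption about the check's semantics, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// exec.Command returns a non-nil error (an *exec.ExitError) on a nonzero exit,
	// so err == nil is exactly "the kubelet unit is active".
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}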
	I1003 17:44:10.075314   13237 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1003 17:44:10.075340   13237 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1003 17:44:10.092680   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:10.092868   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:10.206358   13237 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:44:10.206382   13237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
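Note the "scp memory --> ..." form: unlike the gcp-auth-ns.yaml and gcp-auth-service.yaml copies above, the webhook manifest never exists as a local file; its 5421 bytes are streamed from memory into the guest. A rough standalone equivalent that pipes an in-memory buffer over ssh into a root-owned file (the user, host, and content are placeholders drawn from this log, and this is not ssh_runner's implementation):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n")
	// sudo tee writes the stdin stream to the destination path inside the VM.
	cmd := exec.Command("ssh", "docker@192.168.39.143",
		"sudo tee /etc/kubernetes/addons/gcp-auth-ns.yaml >/dev/null")
	cmd.Stdin = bytes.NewReader(manifest)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}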
	I1003 17:44:10.311103   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 17:44:10.374105   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:10.591731   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:10.592327   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:10.874372   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:11.092847   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:11.094352   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:11.372298   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:11.589854   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:11.592975   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:11.897513   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:12.105356   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:12.105444   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:12.330744   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.54822747s)
	I1003 17:44:12.330802   13237 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.347714971s)
	W1003 17:44:12.330823   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
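This ig-crd.yaml failure is different in kind from the snapshot-class race earlier: "apiVersion not set, kind not set" is kubectl's client-side validation rejecting a document that lacks the two mandatory type fields, so no amount of retrying can succeed until the file itself changes; that is why every retry below applies the gadget daemonset fine but fails on the same CRD file. A small sketch that reproduces the check over a multi-document manifest (an assumed equivalent, not kubectl's implementation):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// typeMeta captures only the two fields kubectl insists on for every document.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var tm typeMeta
		if err := dec.Decode(&tm); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}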
	I1003 17:44:12.330832   13237 system_svc.go:56] duration metric: took 2.347825325s WaitForService to wait for kubelet
	I1003 17:44:12.330867   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.019728863s)
	I1003 17:44:12.330741   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.144819586s)
	I1003 17:44:12.330842   13237 kubeadm.go:586] duration metric: took 12.876324455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 17:44:12.330911   13237 node_conditions.go:102] verifying NodePressure condition ...
	I1003 17:44:12.330869   13237 retry.go:31] will retry after 505.16149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
	I1003 17:44:12.332191   13237 addons.go:479] Verifying addon gcp-auth=true in "addons-925003"
	I1003 17:44:12.333847   13237 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1003 17:44:12.333923   13237 node_conditions.go:123] node cpu capacity is 2
	I1003 17:44:12.333934   13237 node_conditions.go:105] duration metric: took 3.01278ms to run NodePressure ...
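The NodePressure step reads each node's capacity and pressure conditions; the two capacity figures above (17734596Ki of ephemeral storage, 2 CPUs) come straight from node.Status.Capacity. A sketch of the assumed shape of that check:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		// A node under MemoryPressure or DiskPressure would fail the check.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition: %s\n", c.Type)
			}
		}
	}
}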
	I1003 17:44:12.333986   13237 start.go:241] waiting for startup goroutines ...
	I1003 17:44:12.334118   13237 out.go:179] * Verifying gcp-auth addon...
	I1003 17:44:12.336356   13237 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1003 17:44:12.342587   13237 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1003 17:44:12.342611   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
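All of the interleaved "waiting for pod ... current state: Pending" lines come from the same kind of loop: list pods by label selector, inspect the phase, sleep, repeat. A hedged sketch of such a loop for the gcp-auth selector just introduced (an assumed equivalent of kapi.go:96, not its actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	opts := metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=gcp-auth"}
	for {
		pods, err := cs.CoreV1().Pods("gcp-auth").List(context.TODO(), opts)
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
				}
			}
			if allRunning {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}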
	I1003 17:44:12.371922   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:12.588887   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:12.592157   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:12.836362   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:12.842507   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:12.873478   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:13.090424   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:13.092269   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:13.343183   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:13.374078   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:13.594768   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:13.597898   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:13.841893   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:13.874134   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:14.088017   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:14.088226   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:14.303052   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.466648789s)
	W1003 17:44:14.303105   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:14.303130   13237 retry.go:31] will retry after 629.162149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
	I1003 17:44:14.346164   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:14.454815   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:14.588021   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:14.588199   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:14.842683   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:14.872803   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:14.932731   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:15.090031   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:15.093078   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:15.342998   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:15.373155   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:15.589275   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:15.592943   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:15.843511   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:15.874462   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:16.026864   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.09408755s)
	W1003 17:44:16.026905   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:16.026923   13237 retry.go:31] will retry after 608.822727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
	I1003 17:44:16.090037   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:16.091710   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:16.343207   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:16.374035   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:16.592430   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:16.593770   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:16.636929   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:16.840082   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:16.871667   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:17.088457   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:17.088697   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:17.341739   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:17.372873   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:17.589356   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:17.590539   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:17.841598   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:17.870582   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:17.890147   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.253179883s)
	W1003 17:44:17.890186   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:17.890202   13237 retry.go:31] will retry after 818.224451ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
	I1003 17:44:18.093146   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:18.094836   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:18.342760   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:18.374988   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:18.590496   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:18.592133   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:18.709428   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:18.841118   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:18.874558   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:19.090988   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:19.092458   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:19.341927   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:19.374396   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:19.589004   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:19.590200   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:19.858075   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:19.873397   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:19.894662   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.185194347s)
	W1003 17:44:19.894694   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:19.894713   13237 retry.go:31] will retry after 1.165494787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
	I1003 17:44:20.087605   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:20.089560   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:20.709610   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:20.713009   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:20.713132   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:20.713296   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:20.840658   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:20.872959   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:21.061080   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:21.088501   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:21.090370   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:21.341414   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:21.372622   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:21.590523   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:21.590894   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:21.841040   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:21.871359   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:22.061200   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.000077306s)
	W1003 17:44:22.061239   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:22.061255   13237 retry.go:31] will retry after 3.288007764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
	I1003 17:44:22.088536   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:22.090995   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:22.340612   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:22.371935   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:22.588316   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:22.590373   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:22.923938   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:22.936500   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:23.121187   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:23.121273   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:23.343159   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:23.376807   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:23.589861   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:23.592197   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:23.841970   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:23.873761   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:24.087450   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:24.091701   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:24.344296   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:24.372705   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:24.587729   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:24.590166   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:24.840573   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:24.870928   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:25.091011   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:25.092763   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:25.340958   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:25.350182   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:25.372472   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:25.586915   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:25.589619   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:25.840240   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:25.875636   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:26.089746   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:26.092753   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1003 17:44:26.215171   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:26.215209   13237 retry.go:31] will retry after 5.613750594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply failure logged above; omitted]
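The retry delays logged for this doomed apply so far (264ms, 505ms, 629ms, 608ms, 818ms, 1.165s, 3.288s, 5.614s) trend upward but not monotonically, which is the signature of exponential backoff plus random jitter. A self-contained sketch that produces the same kind of schedule (an assumption about retry.go's strategy, not its code):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to attempts times, doubling the base delay each attempt
// and adding up to 50% random jitter, which makes successive delays
// occasionally shrink just as in the log above.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d / 2)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	remaining := 3
	_ = retry(5, 250*time.Millisecond, func() error {
		if remaining > 0 {
			remaining--
			return fmt.Errorf("transient failure")
		}
		return nil
	})
}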
	I1003 17:44:26.343089   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:26.373113   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:26.588177   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:26.589249   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:26.842374   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:26.872065   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:27.493002   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:27.493241   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:27.493326   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:27.494284   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:27.586723   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:27.589693   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:27.843184   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:27.871711   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:28.088747   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:28.089081   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:28.342659   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:28.443020   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:28.587030   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:28.587619   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:28.840200   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:28.872479   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:29.087349   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:29.087655   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:29.343230   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:29.372281   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:29.588152   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:29.588946   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:29.840814   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:29.873660   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:30.088942   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:30.089500   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:30.339582   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:30.371831   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:30.589298   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:30.589851   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:30.841241   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:30.871237   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:31.087809   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:31.088530   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:31.343013   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:31.375398   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:31.587528   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:31.588400   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:31.829931   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:31.841306   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:31.872898   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:32.087873   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:32.089503   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:32.341247   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:32.429667   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 17:44:32.590193   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:32.590230   13237 retry.go:31] will retry after 5.371714337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
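
[Editor's note] The stderr line above is the root cause of every failure in this stretch of the log: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing the top-level apiVersion and kind fields that every Kubernetes object must declare. The file's actual contents never appear in the log, so the Go program below is only a minimal sketch of the check kubectl is making; the embedded manifest header is hypothetical and stands in for whatever ig-crd.yaml really contains.

    package main

    import (
    	"fmt"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    // manifest is a hypothetical stand-in for /etc/kubernetes/addons/ig-crd.yaml,
    // whose real contents never appear in this log. Like the file kubectl is
    // rejecting, it has metadata and spec but no apiVersion or kind.
    const manifest = `
    metadata:
      name: example
    spec: {}
    `

    func main() {
    	var obj map[string]interface{}
    	if err := yaml.Unmarshal([]byte(manifest), &obj); err != nil {
    		panic(err)
    	}
    	// kubectl's client-side validation requires these two top-level
    	// fields on every object; their absence is exactly what the
    	// stderr above reports.
    	var missing []string
    	for _, field := range []string{"apiVersion", "kind"} {
    		if _, ok := obj[field]; !ok {
    			missing = append(missing, field+" not set")
    		}
    	}
    	if len(missing) > 0 {
    		fmt.Printf("error validating data: [%s]\n", strings.Join(missing, ", "))
    	}
    }

This prints "error validating data: [apiVersion not set, kind not set]", matching the log. The fix is to add the two missing fields to the manifest; the error message's own suggestion of --validate=false would merely bypass the check.
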
	I1003 17:44:32.592230   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:32.592405   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:32.840379   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:32.872991   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:33.090443   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:33.092695   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:33.341552   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:33.377288   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:33.588830   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:33.588889   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:33.842349   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:33.872660   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:34.091414   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:34.094050   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:34.341914   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:34.372074   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:34.589657   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:34.590246   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:34.841943   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:34.874166   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:35.091369   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:35.091527   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:35.341729   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:35.372883   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:35.589051   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:35.591249   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:35.840387   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:35.871088   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:36.088750   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:36.089825   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:36.339768   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:36.371676   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:36.588289   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:36.589268   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:36.841335   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:36.872872   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:37.094150   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:37.094445   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:37.340367   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:37.371007   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:37.587867   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:37.588027   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:37.840233   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:37.871669   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:37.962621   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:38.088897   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:38.089162   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:38.340196   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:38.373552   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:38.588743   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:38.588900   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1003 17:44:38.674321   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:38.674357   13237 retry.go:31] will retry after 12.201359973s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:38.840087   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:38.872016   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:39.087583   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:39.089224   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:39.352872   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:39.371961   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:39.587959   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:39.591934   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:39.840333   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:39.871756   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:40.089662   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:40.090483   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:40.340202   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:40.372418   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:40.590588   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:40.593374   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:40.841115   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:40.871928   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:41.087110   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:41.088172   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:41.341160   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:41.372673   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:41.733026   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:41.741747   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:41.840320   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:41.871734   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:42.086914   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:42.087004   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:42.339441   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:42.371192   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:42.587693   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:42.587767   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:42.841611   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:42.870776   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:43.088991   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:43.090630   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:43.343226   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:43.374557   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:43.588382   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:43.591442   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:43.842100   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:43.874719   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:44.090721   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:44.093593   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:44.339831   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:44.373346   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:44.588804   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:44.591616   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:44.839709   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:44.874170   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:45.088342   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:45.088559   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:45.348414   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:45.371745   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:45.738148   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:45.740847   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:45.841069   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:45.872208   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:46.088489   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:46.088801   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:46.342298   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:46.376713   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:46.597436   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:46.597520   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:46.840472   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:46.873099   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:47.089957   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:47.091427   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:47.340915   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:47.373897   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:47.587681   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:47.587927   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:47.841448   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:47.872266   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:48.090949   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:48.093707   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:48.341962   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:48.373757   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:48.587925   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:48.588005   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:48.864626   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:48.872732   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:49.088663   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:49.089702   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:49.342969   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:49.377450   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:49.589239   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:49.589424   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:49.840399   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:49.872914   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:50.087681   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:50.087681   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:50.339902   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:50.371510   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:50.589987   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:50.591466   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:50.840186   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:50.872046   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:50.875939   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:44:51.087943   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:51.093022   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:51.344656   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:51.377879   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:51.589979   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:51.594343   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1003 17:44:51.716498   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:51.716531   13237 retry.go:31] will retry after 13.354393015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:44:51.840268   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:51.872650   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:52.087560   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:52.087820   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:52.339768   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:52.371392   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:52.586417   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:52.588184   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:52.840434   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:52.874515   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:53.089591   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:53.089935   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:53.342960   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:53.374315   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:53.588334   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:53.590688   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:53.841992   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:53.873297   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:54.088544   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:54.088981   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:54.341034   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:54.371840   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:54.588464   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:54.588654   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:54.844386   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:54.945623   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:55.086981   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:55.087692   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:55.340900   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:55.373073   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:55.588364   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:55.589055   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 17:44:55.840810   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:55.871295   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:56.086623   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:56.087696   13237 kapi.go:107] duration metric: took 47.503809628s to wait for kubernetes.io/minikube-addons=registry ...
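
[Editor's note] The kapi.go:96 "waiting for pod" lines and the kapi.go:107 "duration metric" line above are two ends of the same poll loop: minikube lists pods matching the label selector every few hundred milliseconds and stops once they all report Ready. Minikube's own kapi.go is not reproduced in this log, so the client-go program below is only a rough sketch of such a wait loop under that assumption; the kube-system namespace, the 250ms interval, and the 6-minute timeout are guesses based on the log's cadence.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching selector reports Ready,
    // mirroring the kapi.go:96 "waiting for pod" / kapi.go:107 "duration
    // metric" pair in the log above.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	deadline := start.Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
    			fmt.Printf("duration metric: took %s to wait for %s ...\n",
    				time.Since(start), selector)
    			return nil
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", selector)
    }

    func allReady(pods []corev1.Pod) bool {
    	for _, p := range pods {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
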
	I1003 17:44:56.340797   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:56.372483   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:56.586916   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:56.843233   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:56.874565   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:57.087754   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:57.341162   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:57.372563   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:57.586990   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:57.841344   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:57.870930   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:58.087890   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:58.342250   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:58.375623   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:58.590978   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:58.842524   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:58.877086   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:59.090124   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:59.505367   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:59.507992   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:44:59.588173   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:44:59.851150   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:44:59.872270   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:00.088072   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:00.340652   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:00.372439   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:00.586798   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:00.842032   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:00.871856   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:01.088223   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:01.340976   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:01.372458   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:01.587872   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:01.842090   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:01.876540   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:02.088351   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:02.342138   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:02.375985   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:02.589860   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:02.841141   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:02.874648   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:03.090358   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:03.341073   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:03.374013   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:03.591931   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:03.840357   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:03.873246   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:04.093118   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:04.341422   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:04.373900   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:04.588298   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:04.841092   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:04.871356   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:05.071536   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:45:05.088110   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:05.343059   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:05.373131   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:05.589810   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:05.845037   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:05.878063   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 17:45:05.890415   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:45:05.890446   13237 retry.go:31] will retry after 18.752668069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 17:45:06.088062   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:06.341000   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:06.372018   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:06.588126   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:06.842274   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:06.870551   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:07.087410   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:07.339415   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:07.377962   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:07.590417   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:07.841948   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:07.875636   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:08.088958   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:08.340732   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:08.374442   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:08.587146   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:08.842718   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:08.874268   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:09.087543   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:09.341370   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:09.373718   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:09.589601   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:09.893527   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:09.894969   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:10.089848   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:10.342909   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:10.371035   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:10.592048   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:10.841174   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:10.871582   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:11.095580   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:11.342346   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:11.376763   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:11.588725   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:11.841150   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:11.874881   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:12.087528   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:12.339455   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:12.370802   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:12.589205   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:12.840529   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:12.874814   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:13.090725   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:13.344672   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:13.374706   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:13.589402   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:13.843334   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:13.871585   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:14.086832   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:14.340553   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:14.372141   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:14.655857   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:14.844878   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:14.944854   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:15.099457   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:15.339989   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:15.373646   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:15.590358   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:15.842150   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:15.875572   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:16.088449   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:16.435770   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:16.734661   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:16.736593   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:16.845428   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:16.873086   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:17.088210   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:17.342686   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:17.379847   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:17.587295   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:17.844519   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:17.872141   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:18.097627   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:18.339543   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:18.370751   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:18.587474   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:18.840254   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:18.872353   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:19.086145   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:19.342406   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:19.370950   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:19.587648   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:19.840983   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:19.872813   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:20.090307   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:20.343817   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:20.376495   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:20.590101   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:20.843048   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:20.874167   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:21.090137   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:21.346984   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:21.373487   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:21.595316   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:21.842546   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:21.870962   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:22.087890   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:22.342045   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:22.378571   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:22.589224   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:22.842510   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:22.872584   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:23.090128   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:23.410398   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:23.411761   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:23.590022   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:23.844831   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:23.946137   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:24.088445   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:24.344700   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:24.379065   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:24.587396   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:24.643524   13237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 17:45:24.846705   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:24.893718   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:25.088474   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:25.342225   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:25.372932   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:25.591309   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:25.715025   13237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.071455076s)
	W1003 17:45:25.715076   13237 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 17:45:25.715180   13237 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
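For context: kubectl validates each manifest document client-side and rejects any document that does not set apiVersion and kind, which is exactly what the ig-crd.yaml apply above tripped over before the addon manager retried. A minimal sketch that reproduces the same class of error (hypothetical file path, not from this run):

	cat <<'EOF' > /tmp/missing-header.yaml
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/missing-header.yaml
	# expected, per the error format above:
	#   error: error validating "/tmp/missing-header.yaml": error validating data:
	#   [apiVersion not set, kind not set]; if you choose to ignore these errors,
	#   turn validation off with --validate=false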
	I1003 17:45:25.841270   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:25.871099   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:26.093939   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:26.343187   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:26.377970   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:26.589344   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:26.840031   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:26.872189   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:27.088045   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:27.340483   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:27.375369   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:27.586621   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:27.841324   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:27.871469   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:28.087753   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:28.345296   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:28.444794   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:28.587267   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:28.840206   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:28.871597   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 17:45:29.087695   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:29.344455   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:29.379408   13237 kapi.go:107] duration metric: took 1m19.51216101s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1003 17:45:29.587246   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:29.841062   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:30.087297   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:30.340058   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:30.586846   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:30.840161   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:31.087130   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:31.341042   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:31.587504   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:31.840915   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:32.088190   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:32.340435   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:32.587311   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:32.839551   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:33.087594   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:33.340082   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:33.587963   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:33.840928   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:34.088611   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:34.341036   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:34.588145   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:34.840591   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:35.087703   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:35.340476   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:35.587035   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:35.840713   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:36.087630   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:36.340681   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:36.588249   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:36.840209   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:37.086680   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:37.339666   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:37.587405   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:37.840009   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:38.086209   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:38.339524   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:38.587040   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:38.840549   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:39.088764   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:39.339983   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:39.586694   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:39.840385   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:40.086796   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:40.340684   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:40.586763   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:40.840501   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:41.087432   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:41.339548   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:41.588387   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:41.842245   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:42.087087   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:42.340577   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:42.587511   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:42.840139   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:43.086743   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:43.340088   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:43.587217   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:43.842719   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:44.088520   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:44.340196   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:44.587343   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:44.839770   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:45.088559   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:45.340381   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:45.587040   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:45.841415   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:46.087839   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:46.341825   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:46.587828   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:46.840444   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:47.087488   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:47.339410   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:47.587144   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:47.839982   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:48.086794   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:48.340448   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:48.587192   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:48.841101   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:49.087278   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:49.339407   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:49.587277   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:49.839851   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:50.088230   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:50.341441   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:50.587018   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:50.840631   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:51.087462   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:51.339975   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:51.587513   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:51.841101   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:52.086572   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:52.339600   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:52.588597   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:52.840367   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:53.087112   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:53.340514   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:53.587319   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:53.840910   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:54.088144   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:54.341043   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:54.587660   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:54.839935   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:55.087746   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:55.341345   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:55.587285   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:55.840752   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:56.087574   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:56.340002   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:56.587772   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:56.839948   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:57.087839   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:57.383418   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:57.587622   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:57.840450   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:58.086692   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:58.341692   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:58.588369   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:58.840045   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:59.087137   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:59.340759   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:45:59.590358   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:45:59.839743   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:00.087274   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:00.341306   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:00.588058   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:00.840474   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:01.087579   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:01.339646   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:01.587668   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:01.840219   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:02.086915   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:02.431181   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:02.587648   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:02.839872   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:03.088286   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:03.339294   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:03.587564   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:03.840480   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:04.087958   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:04.340477   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:04.587707   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:04.840254   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:05.087012   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:05.341758   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:05.588063   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:05.840295   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:06.088833   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:06.340633   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:06.587919   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:06.840635   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:07.087440   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:07.340259   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:07.587289   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:07.840448   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:08.087823   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:08.340989   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:08.587317   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:08.839698   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:09.088178   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:09.340866   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:09.586734   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:09.840272   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:10.086993   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:10.340963   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:10.589310   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:10.840400   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:11.087637   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:11.339955   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:11.587640   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:11.839597   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:12.087356   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:12.340977   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:12.587932   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:12.840409   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:13.087375   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:13.339819   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:13.587686   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:13.840677   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:14.088044   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:14.344105   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:14.586731   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:14.840758   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:15.088613   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:15.340407   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:15.587436   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:15.841451   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:16.086803   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:16.340722   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:16.587951   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:16.840600   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:17.087768   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:17.340084   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:17.588117   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:17.840698   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:18.087472   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:18.339761   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:18.587920   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:18.840729   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:19.088262   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:19.340286   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:19.587301   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:19.840068   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:20.087413   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:20.339521   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:20.587379   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:20.839617   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:21.087482   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:21.339900   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:21.588973   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:21.843108   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:22.086886   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:22.341863   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:22.589089   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:22.840932   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:23.092414   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:23.345274   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:23.589636   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:23.844310   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:24.088608   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:24.341918   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:24.590703   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:24.840921   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:25.090501   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:25.341396   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:25.607127   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:25.841182   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:26.087118   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:26.342907   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:26.591473   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:26.839720   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:27.090172   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:27.341232   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:27.588543   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:27.843085   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:28.090700   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:28.341530   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:28.587872   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:28.840426   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:29.087447   13237 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 17:46:29.340351   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:29.587587   13237 kapi.go:107] duration metric: took 2m21.004407375s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1003 17:46:29.840627   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:30.340429   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:30.840372   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:31.342404   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:31.840504   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:32.343059   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:32.843486   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:33.340591   13237 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 17:46:33.841045   13237 kapi.go:107] duration metric: took 2m21.504681438s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1003 17:46:33.843012   13237 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-925003 cluster.
	I1003 17:46:33.844686   13237 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1003 17:46:33.846195   13237 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1003 17:46:33.847659   13237 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, default-storageclass, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, cloud-spanner, ingress-dns, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1003 17:46:33.848894   13237 addons.go:514] duration metric: took 2m34.394354666s for enable addons: enabled=[amd-gpu-device-plugin registry-creds default-storageclass nvidia-device-plugin storage-provisioner storage-provisioner-rancher cloud-spanner ingress-dns metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1003 17:46:33.848936   13237 start.go:246] waiting for cluster config update ...
	I1003 17:46:33.848955   13237 start.go:255] writing updated cluster config ...
	I1003 17:46:33.849206   13237 ssh_runner.go:195] Run: rm -f paused
	I1003 17:46:33.855965   13237 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 17:46:33.860457   13237 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m9pxm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:33.866071   13237 pod_ready.go:94] pod "coredns-66bc5c9577-m9pxm" is "Ready"
	I1003 17:46:33.866091   13237 pod_ready.go:86] duration metric: took 5.614052ms for pod "coredns-66bc5c9577-m9pxm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:33.868743   13237 pod_ready.go:83] waiting for pod "etcd-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:33.873892   13237 pod_ready.go:94] pod "etcd-addons-925003" is "Ready"
	I1003 17:46:33.873910   13237 pod_ready.go:86] duration metric: took 5.15016ms for pod "etcd-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:33.876220   13237 pod_ready.go:83] waiting for pod "kube-apiserver-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:33.882030   13237 pod_ready.go:94] pod "kube-apiserver-addons-925003" is "Ready"
	I1003 17:46:33.882049   13237 pod_ready.go:86] duration metric: took 5.812247ms for pod "kube-apiserver-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:33.884171   13237 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:34.260337   13237 pod_ready.go:94] pod "kube-controller-manager-addons-925003" is "Ready"
	I1003 17:46:34.260368   13237 pod_ready.go:86] duration metric: took 376.179181ms for pod "kube-controller-manager-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:34.461334   13237 pod_ready.go:83] waiting for pod "kube-proxy-qhl2n" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:34.860835   13237 pod_ready.go:94] pod "kube-proxy-qhl2n" is "Ready"
	I1003 17:46:34.860864   13237 pod_ready.go:86] duration metric: took 399.508067ms for pod "kube-proxy-qhl2n" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:35.060805   13237 pod_ready.go:83] waiting for pod "kube-scheduler-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:35.461026   13237 pod_ready.go:94] pod "kube-scheduler-addons-925003" is "Ready"
	I1003 17:46:35.461053   13237 pod_ready.go:86] duration metric: took 400.220304ms for pod "kube-scheduler-addons-925003" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 17:46:35.461067   13237 pod_ready.go:40] duration metric: took 1.605066141s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 17:46:35.508630   13237 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1003 17:46:35.510599   13237 out.go:179] * Done! kubectl is now configured to use "addons-925003" cluster and "default" namespace by default
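The gcp-auth hint earlier in this log means credentials are only mounted into pods created after the addon finished; for pods that already existed, minikube's own suggestion is to rerun the enable step with --refresh. A sketch reusing this run's binary and profile name:

	out/minikube-linux-amd64 -p addons-925003 addons enable gcp-auth --refresh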
	
	
	==> CRI-O <==
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.722832252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fc9e7f9-7dc4-4f24-9d3c-85ac6a00fdd5 name=/runtime.v1.RuntimeService/Version
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.722958046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fc9e7f9-7dc4-4f24-9d3c-85ac6a00fdd5 name=/runtime.v1.RuntimeService/Version
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.725444264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a57e0ecb-13c9-479c-815a-62a9ab3b8dab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.727794369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759513777727755855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a57e0ecb-13c9-479c-815a-62a9ab3b8dab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.728827954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c35d3743-599c-4bbf-ba1f-635622dce02c name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.728988876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c35d3743-599c-4bbf-ba1f-635622dce02c name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.729353210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ff0a480f23e70a35755e0dd14c3b9f2f3eb3013204346032802b5834beaa94f,PodSandboxId:8d4a02ca956cc5fe35d2f6477ef577a6bd5c40a1980674b9adae7b83c543da95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759513633785165728,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f7233ef-095e-446e-9144-25bb30b59449,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d250b10d8d981c6738ddf4274f9db1853525c2bf48d8b21b9a2a29883c361834,PodSandboxId:0256b1bdbb065e75ff50d296ef9d48f773b3235d7813425261b2f467015c7b75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759513599969482543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43d14f0c-4814-4e73-907a-448430154130,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db17a4d95cf9390470f9b703a0a027584a332ecddb6b1b9196f059805b3a9b2,PodSandboxId:1855f343e351c109303dfa506f52cfad7903bb2363d39fefb06ef1d5996fc9fb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759513588646184936,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-2vhv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 783d9874-f0e8-405b-8ef6-33ce0c602a2a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0ca60e47e83530adf81445bc3f330c95e5d6a52bce7bd6f073d0e3e593cc2ea3,PodSandboxId:fa85429a890491b1c7309183badb21b32956eca8da2d62ed1aa568cd30e72985,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759513513058900472,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-72455,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c86b58a4-e76c-49de-97e5-cc57e2930b83,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4798d975ee7b540b36e214b78131c4bc15ea98e737730789b213cc544ecb48,PodSandboxId:edd753bb059834c2533e4ecae58af3ff5c0a841a96a56f47f198067886a41a9a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759513512928952944,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zq6zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 131b9e68-999e-4e5f-8005-d1191aaaf1f2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c1a5c12336fefe938aca87fa2ee9136fd7896c3324eb0aaedfd1532f0fde83,PodSandboxId:678cbb0635ea66992e4203644ee960160643c623aa9486dc5c2da99e5167f9c2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759513510553866496,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lncq9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 134a63dc-4395-4735-b025-c7b2826ddd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3b3b367cce1becb0e8d5118ab483dab7211d9dada9aa4c04568a3d83c92be,PodSandboxId:b80bdabe59011531d8f218d4c96865ca1b830d5ec45df0ca08ebcf0c99c1a43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759513490437124756,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006dc168-9a22-41fe-8b73-6eb53c20331c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba9f6aced2fe2e31d61f127863ffa3c546a82f1f51f7efc7f9b2f84e50112d9,PodSandboxId:84fa0165f9fe4e095390c992924b5bb17994df07d22a42e495976b8187ed104a,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759513470617997417,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-46299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d98a37ad-4d58-4c10-9a64-b7971edb16a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c590b00aa43a9f567215f5c9359338b7e236c3dbce6ac47c0ec613c354e87a09,PodSandboxId:83265a772d70aaa822019f44c0def0f4c0a1f811da346d4231d993b63e4a67df,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759513449017558094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3111e9-eaf0-46ac-b2a3-8fe6e09fe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0c970e75a886c0db90a50a0f2b1b2f154438c2f02e180e9e0c216b51a08758,PodSandboxId:b32b3592de08afa1779109aee90e3c122bacab1df0a69e61f88265d26556c211,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759513443185574029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m9pxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bb7719-6400-4f0b-b9f9-ab570693ebd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86530f71cc4817d6977b8512de0e341322df00d2b8fc24805e2e089ad6d1c2c7,PodSandboxId:ff72f94ceed65af01d80868679eeed8814648cf1dca2be56493d5bbd0fcc32b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759513440521560378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhl2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 546d4057-9f5e-4c44-8bb8-c6d2c1eae5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc8b8d6904eb18730f9557388608771affffab34783031c116dfbee78a8ad5a,PodSandboxId:99e86068e58bcf1946a41e0db89b63c63f25820a7f6fc99a832f175c1ed99790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759513428837722131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb1c255c673afc14a009f927107411ef,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06905b9e5a21054268aad00c96765bce69aee31005fcb41e7f5eccd95091429,PodSandboxId:20c9f9cbc60a04971191c757bcbf1d5fd018cfd7837084fed4aadc0b2b41039b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759513428800784389,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b9b35982fdd419a38dfffe8df14b16,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb228ee59b10d57c340cc0dd3f48989cb5aa29fd246294c4459762194e06e8fb,PodSandboxId:7e8f50f06ed2c45413bfe2499d3ec169563dedbc399dd520617ff39a90f7c3c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759513428788803604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a5ca8f94ad096513d5898357930f53,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96d60a1f8a0f595c84e0c10c3fbf7637deed34bf1a4300d8ec47a79feca2864,PodSandboxId:4749cd5a7c801e62572b31daaca448709b9d34028d3e5cab1f7b4e1653b5fedd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759513428769791068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958bb0e9ffdef83dc50d6cc9fd6144a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c35d3743-599c-4bbf-ba1f-635622dce02c name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.762335961Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=4c7bd096-815e-401b-8f94-6d233ad93e27 name=/runtime.v1.RuntimeService/ExecSync
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.762755555Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=4c7bd096-815e-401b-8f94-6d233ad93e27 name=/runtime.v1.RuntimeService/ExecSync
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.780708034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6abc8e35-5352-4b09-93c2-c62bba1b2302 name=/runtime.v1.RuntimeService/Version
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.780896905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6abc8e35-5352-4b09-93c2-c62bba1b2302 name=/runtime.v1.RuntimeService/Version
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.782443829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9d26d93-ada9-4834-82c0-0d0ce5e7b6d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.783742520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759513777783690186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9d26d93-ada9-4834-82c0-0d0ce5e7b6d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.784822942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03ecd312-f63e-43cf-b30f-293b76a42bac name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.784886777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03ecd312-f63e-43cf-b30f-293b76a42bac name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.785274173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ff0a480f23e70a35755e0dd14c3b9f2f3eb3013204346032802b5834beaa94f,PodSandboxId:8d4a02ca956cc5fe35d2f6477ef577a6bd5c40a1980674b9adae7b83c543da95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759513633785165728,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f7233ef-095e-446e-9144-25bb30b59449,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d250b10d8d981c6738ddf4274f9db1853525c2bf48d8b21b9a2a29883c361834,PodSandboxId:0256b1bdbb065e75ff50d296ef9d48f773b3235d7813425261b2f467015c7b75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759513599969482543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43d14f0c-4814-4e73-907a-448430154130,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db17a4d95cf9390470f9b703a0a027584a332ecddb6b1b9196f059805b3a9b2,PodSandboxId:1855f343e351c109303dfa506f52cfad7903bb2363d39fefb06ef1d5996fc9fb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759513588646184936,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-2vhv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 783d9874-f0e8-405b-8ef6-33ce0c602a2a,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0ca60e47e83530adf81445bc3f330c95e5d6a52bce7bd6f073d0e3e593cc2ea3,PodSandboxId:fa85429a890491b1c7309183badb21b32956eca8da2d62ed1aa568cd30e72985,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759513513058900472,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-72455,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c86b58a4-e76c-49de-97e5-cc57e2930b83,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4798d975ee7b540b36e214b78131c4bc15ea98e737730789b213cc544ecb48,PodSandboxId:edd753bb059834c2533e4ecae58af3ff5c0a841a96a56f47f198067886a41a9a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759513512928952944,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zq6zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 131b9e68-999e-4e5f-8005-d1191aaaf1f2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c1a5c12336fefe938aca87fa2ee9136fd7896c3324eb0aaedfd1532f0fde83,PodSandboxId:678cbb0635ea66992e4203644ee960160643c623aa9486dc5c2da99e5167f9c2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759513510553866496,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lncq9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 134a63dc-4395-4735-b025-c7b2826ddd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3b3b367cce1becb0e8d5118ab483dab7211d9dada9aa4c04568a3d83c92be,PodSandboxId:b80bdabe59011531d8f218d4c96865ca1b830d5ec45df0ca08ebcf0c99c1a43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759513490437124756,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006dc168-9a22-41fe-8b73-6eb53c20331c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba9f6aced2fe2e31d61f127863ffa3c546a82f1f51f7efc7f9b2f84e50112d9,PodSandboxId:84fa0165f9fe4e095390c992924b5bb17994df07d22a42e495976b8187ed104a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759513470617997417,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-46299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d98a37ad-4d58-4c10-9a64-b7971edb16a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c590b00aa43a9f567215f5c9359338b7e236c3dbce6ac47c0ec613c354e87a09,PodSandboxId:83265a772d70aaa822019f44c0def0f4c0a1f811da346d4231d993b63e4a67df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759513449017558094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3111e9-eaf0-46ac-b2a3-8fe6e09fe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0c970e75a886c0db90a50a0f2b1b2f154438c2f02e180e9e0c216b51a08758,PodSandboxId:b32b3592de08afa1779109aee90e3c122bacab1df0a69e61f88265d26556c211,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759513443185574029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m9pxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bb7719-6400-4f0b-b9f9-ab570693ebd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86530f71cc4817d6977b8512de0e341322df00d2b8fc24805e2e089ad6d1c2c7,PodSandboxId:ff72f94ceed65af01d80868679eeed8814648cf1dca2be56493d5bbd0fcc32b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759513440521560378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhl2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 546d4057-9f5e-4c44-8bb8-c6d2c1eae5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc8b8d6904eb18730f9557388608771affffab34783031c116dfbee78a8ad5a,PodSandboxId:99e86068e58bcf1946a41e0db89b63c63f25820a7f6fc99a832f175c1ed99790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759513428837722131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb1c255c673afc14a009f927107411ef,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06905b9e5a21054268aad00c96765bce69aee31005fcb41e7f5eccd95091429,PodSandboxId:20c9f9cbc60a04971191c757bcbf1d5fd018cfd7837084fed4aadc0b2b41039b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759513428800784389,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b9b35982fdd419a38dfffe8df14b16,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb228ee59b10d57c340cc0dd3f48989cb5aa29fd246294c4459762194e06e8fb,PodSandboxId:7e8f50f06ed2c45413bfe2499d3ec169563dedbc399dd520617ff39a90f7c3c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759513428788803604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a5ca8f94ad096513d5898357930f53,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96d60a1f8a0f595c84e0c10c3fbf7637deed34bf1a4300d8ec47a79feca2864,PodSandboxId:4749cd5a7c801e62572b31daaca448709b9d34028d3e5cab1f7b4e1653b5fedd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759513428769791068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958bb0e9ffdef83dc50d6cc9fd6144a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03ecd312-f63e-43cf-b30f-293b76a42bac name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.822120238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b4ef242-9dc9-410e-914f-0ed2a05a5c4d name=/runtime.v1.RuntimeService/Version
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.822193500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b4ef242-9dc9-410e-914f-0ed2a05a5c4d name=/runtime.v1.RuntimeService/Version
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.824379725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11385f5e-fb3e-4c0e-affa-f200a5bbaf53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.825896443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759513777825861426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11385f5e-fb3e-4c0e-affa-f200a5bbaf53 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.829579049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02628a0e-d693-4575-81a5-8d307c244a98 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.829755096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02628a0e-d693-4575-81a5-8d307c244a98 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.831086098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ff0a480f23e70a35755e0dd14c3b9f2f3eb3013204346032802b5834beaa94f,PodSandboxId:8d4a02ca956cc5fe35d2f6477ef577a6bd5c40a1980674b9adae7b83c543da95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759513633785165728,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0f7233ef-095e-446e-9144-25bb30b59449,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d250b10d8d981c6738ddf4274f9db1853525c2bf48d8b21b9a2a29883c361834,PodSandboxId:0256b1bdbb065e75ff50d296ef9d48f773b3235d7813425261b2f467015c7b75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759513599969482543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 43d14f0c-4814-4e73-907a-448430154130,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db17a4d95cf9390470f9b703a0a027584a332ecddb6b1b9196f059805b3a9b2,PodSandboxId:1855f343e351c109303dfa506f52cfad7903bb2363d39fefb06ef1d5996fc9fb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759513588646184936,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-2vhv9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 783d9874-f0e8-405b-8ef6-33ce0c602a2a,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0ca60e47e83530adf81445bc3f330c95e5d6a52bce7bd6f073d0e3e593cc2ea3,PodSandboxId:fa85429a890491b1c7309183badb21b32956eca8da2d62ed1aa568cd30e72985,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759513513058900472,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-72455,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c86b58a4-e76c-49de-97e5-cc57e2930b83,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4798d975ee7b540b36e214b78131c4bc15ea98e737730789b213cc544ecb48,PodSandboxId:edd753bb059834c2533e4ecae58af3ff5c0a841a96a56f47f198067886a41a9a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759513512928952944,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zq6zr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 131b9e68-999e-4e5f-8005-d1191aaaf1f2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c1a5c12336fefe938aca87fa2ee9136fd7896c3324eb0aaedfd1532f0fde83,PodSandboxId:678cbb0635ea66992e4203644ee960160643c623aa9486dc5c2da99e5167f9c2,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759513510553866496,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-lncq9,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 134a63dc-4395-4735-b025-c7b2826ddd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab3b3b367cce1becb0e8d5118ab483dab7211d9dada9aa4c04568a3d83c92be,PodSandboxId:b80bdabe59011531d8f218d4c96865ca1b830d5ec45df0ca08ebcf0c99c1a43d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759513490437124756,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006dc168-9a22-41fe-8b73-6eb53c20331c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ba9f6aced2fe2e31d61f127863ffa3c546a82f1f51f7efc7f9b2f84e50112d9,PodSandboxId:84fa0165f9fe4e095390c992924b5bb17994df07d22a42e495976b8187ed104a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759513470617997417,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-46299,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d98a37ad-4d58-4c10-9a64-b7971edb16a9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c590b00aa43a9f567215f5c9359338b7e236c3dbce6ac47c0ec613c354e87a09,PodSandboxId:83265a772d70aaa822019f44c0def0f4c0a1f811da346d4231d993b63e4a67df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759513449017558094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a3111e9-eaf0-46ac-b2a3-8fe6e09fe8b1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce0c970e75a886c0db90a50a0f2b1b2f154438c2f02e180e9e0c216b51a08758,PodSandboxId:b32b3592de08afa1779109aee90e3c122bacab1df0a69e61f88265d26556c211,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759513443185574029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m9pxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bb7719-6400-4f0b-b9f9-ab570693ebd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86530f71cc4817d6977b8512de0e341322df00d2b8fc24805e2e089ad6d1c2c7,PodSandboxId:ff72f94ceed65af01d80868679eeed8814648cf1dca2be56493d5bbd0fcc32b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759513440521560378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhl2n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 546d4057-9f5e-4c44-8bb8-c6d2c1eae5ec,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc8b8d6904eb18730f9557388608771affffab34783031c116dfbee78a8ad5a,PodSandboxId:99e86068e58bcf1946a41e0db89b63c63f25820a7f6fc99a832f175c1ed99790,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759513428837722131,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb1c255c673afc14a009f927107411ef,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06905b9e5a21054268aad00c96765bce69aee31005fcb41e7f5eccd95091429,PodSandboxId:20c9f9cbc60a04971191c757bcbf1d5fd018cfd7837084fed4aadc0b2b41039b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759513428800784389,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b9b35982fdd419a38dfffe8df14b16,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb228ee59b10d57c340cc0dd3f48989cb5aa29fd246294c4459762194e06e8fb,PodSandboxId:7e8f50f06ed2c45413bfe2499d3ec169563dedbc399dd520617ff39a90f7c3c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759513428788803604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a5ca8f94ad096513d5898357930f53,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96d60a1f8a0f595c84e0c10c3fbf7637deed34bf1a4300d8ec47a79feca2864,PodSandboxId:4749cd5a7c801e62572b31daaca448709b9d34028d3e5cab1f7b4e1653b5fedd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759513428769791068,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-925003,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 958bb0e9ffdef83dc50d6cc9fd6144a4,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02628a0e-d693-4575-81a5-8d307c244a98 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.832882242Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 03 17:49:37 addons-925003 crio[820]: time="2025-10-03 17:49:37.833311702Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9ff0a480f23e7       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   8d4a02ca956cc       nginx
	d250b10d8d981       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   0256b1bdbb065       busybox
	9db17a4d95cf9       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   1855f343e351c       ingress-nginx-controller-9cc49f96f-2vhv9
	0ca60e47e8353       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     1                   fa85429a89049       ingress-nginx-admission-patch-72455
	7c4798d975ee7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   edd753bb05983       ingress-nginx-admission-create-zq6zr
	00c1a5c12336f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   678cbb0635ea6       gadget-lncq9
	7ab3b3b367cce       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   b80bdabe59011       kube-ingress-dns-minikube
	0ba9f6aced2fe       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   84fa0165f9fe4       amd-gpu-device-plugin-46299
	c590b00aa43a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   83265a772d70a       storage-provisioner
	ce0c970e75a88       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   b32b3592de08a       coredns-66bc5c9577-m9pxm
	86530f71cc481       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   ff72f94ceed65       kube-proxy-qhl2n
	ccc8b8d6904eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   99e86068e58bc       etcd-addons-925003
	f06905b9e5a21       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   20c9f9cbc60a0       kube-scheduler-addons-925003
	eb228ee59b10d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   7e8f50f06ed2c       kube-apiserver-addons-925003
	d96d60a1f8a0f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   4749cd5a7c801       kube-controller-manager-addons-925003
	
	
	==> coredns [ce0c970e75a886c0db90a50a0f2b1b2f154438c2f02e180e9e0c216b51a08758] <==
	[INFO] 10.244.0.8:38169 - 48234 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000181219s
	[INFO] 10.244.0.8:38169 - 10649 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000088014s
	[INFO] 10.244.0.8:38169 - 8595 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000071806s
	[INFO] 10.244.0.8:38169 - 30607 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000287826s
	[INFO] 10.244.0.8:38169 - 34222 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000102959s
	[INFO] 10.244.0.8:38169 - 36269 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.0001786s
	[INFO] 10.244.0.8:38169 - 42319 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000133074s
	[INFO] 10.244.0.8:44796 - 38488 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175065s
	[INFO] 10.244.0.8:44796 - 38254 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000366708s
	[INFO] 10.244.0.8:56675 - 48929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108426s
	[INFO] 10.244.0.8:56675 - 48641 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00019302s
	[INFO] 10.244.0.8:46811 - 56210 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010127s
	[INFO] 10.244.0.8:46811 - 55930 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000280741s
	[INFO] 10.244.0.8:55858 - 43993 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122518s
	[INFO] 10.244.0.8:55858 - 43587 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000294322s
	[INFO] 10.244.0.23:59810 - 59267 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000600706s
	[INFO] 10.244.0.23:40650 - 42117 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001062857s
	[INFO] 10.244.0.23:36782 - 61423 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116971s
	[INFO] 10.244.0.23:41173 - 44763 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120853s
	[INFO] 10.244.0.23:39860 - 1698 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109476s
	[INFO] 10.244.0.23:59395 - 42096 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000626573s
	[INFO] 10.244.0.23:49272 - 3668 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001420662s
	[INFO] 10.244.0.23:60024 - 1788 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.001470019s
	[INFO] 10.244.0.27:50066 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000662438s
	[INFO] 10.244.0.27:50273 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000186138s
	
	
	==> describe nodes <==
	Name:               addons-925003
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-925003
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=addons-925003
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T17_43_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-925003
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 17:43:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-925003
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 17:49:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 17:47:59 +0000   Fri, 03 Oct 2025 17:43:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 17:47:59 +0000   Fri, 03 Oct 2025 17:43:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 17:47:59 +0000   Fri, 03 Oct 2025 17:43:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 17:47:59 +0000   Fri, 03 Oct 2025 17:43:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    addons-925003
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c28fc726c074f6ba9beeecabdcc7e06
	  System UUID:                4c28fc72-6c07-4f6b-a9be-eecabdcc7e06
	  Boot ID:                    a97d3cd9-3e1a-4295-a9a4-6fbecc32e6f0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     hello-world-app-5d498dc89-l2ncl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-lncq9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-2vhv9    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m30s
	  kube-system                 amd-gpu-device-plugin-46299                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 coredns-66bc5c9577-m9pxm                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m38s
	  kube-system                 etcd-addons-925003                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m44s
	  kube-system                 kube-apiserver-addons-925003                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-controller-manager-addons-925003       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-qhl2n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-scheduler-addons-925003                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node addons-925003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node addons-925003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x7 over 5m51s)  kubelet          Node addons-925003 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m44s                  kubelet          Node addons-925003 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s                  kubelet          Node addons-925003 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s                  kubelet          Node addons-925003 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m43s                  kubelet          Node addons-925003 status is now: NodeReady
	  Normal  RegisteredNode           5m40s                  node-controller  Node addons-925003 event: Registered Node addons-925003 in Controller
	
	
	==> dmesg <==
	[ +14.299878] kauditd_printk_skb: 395 callbacks suppressed
	[  +9.176137] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.424889] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.022393] kauditd_printk_skb: 20 callbacks suppressed
	[Oct 3 17:45] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.228083] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.960770] kauditd_printk_skb: 102 callbacks suppressed
	[  +4.708625] kauditd_printk_skb: 90 callbacks suppressed
	[  +3.416782] kauditd_printk_skb: 91 callbacks suppressed
	[Oct 3 17:46] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000048] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.708767] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.539371] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.491619] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.011411] kauditd_printk_skb: 22 callbacks suppressed
	[Oct 3 17:47] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.754333] kauditd_printk_skb: 209 callbacks suppressed
	[  +6.248662] kauditd_printk_skb: 83 callbacks suppressed
	[  +2.377246] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.846026] kauditd_printk_skb: 58 callbacks suppressed
	[  +8.160302] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.121698] kauditd_printk_skb: 46 callbacks suppressed
	[  +4.557069] kauditd_printk_skb: 25 callbacks suppressed
	[Oct 3 17:49] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [ccc8b8d6904eb18730f9557388608771affffab34783031c116dfbee78a8ad5a] <==
	{"level":"warn","ts":"2025-10-03T17:45:16.725856Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-03T17:45:16.367653Z","time spent":"357.831193ms","remote":"127.0.0.1:60102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-03T17:45:16.725425Z","caller":"traceutil/trace.go:172","msg":"trace[1503483232] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"360.217474ms","start":"2025-10-03T17:45:16.365195Z","end":"2025-10-03T17:45:16.725412Z","steps":["trace[1503483232] 'process raft request'  (duration: 359.868436ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:45:16.727971Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-03T17:45:16.365171Z","time spent":"362.012229ms","remote":"127.0.0.1:60176","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4479,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" mod_revision:740 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" value_size:4412 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-create\" > >"}
	{"level":"info","ts":"2025-10-03T17:45:57.376245Z","caller":"traceutil/trace.go:172","msg":"trace[2100526398] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"138.372137ms","start":"2025-10-03T17:45:57.237831Z","end":"2025-10-03T17:45:57.376203Z","steps":["trace[2100526398] 'process raft request'  (duration: 138.177601ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-03T17:46:25.596516Z","caller":"traceutil/trace.go:172","msg":"trace[1596008529] linearizableReadLoop","detail":"{readStateIndex:1303; appliedIndex:1303; }","duration":"197.344514ms","start":"2025-10-03T17:46:25.399152Z","end":"2025-10-03T17:46:25.596496Z","steps":["trace[1596008529] 'read index received'  (duration: 197.338753ms)","trace[1596008529] 'applied index is now lower than readState.Index'  (duration: 4.811µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-03T17:46:25.596716Z","caller":"traceutil/trace.go:172","msg":"trace[163686689] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"235.504713ms","start":"2025-10-03T17:46:25.361200Z","end":"2025-10-03T17:46:25.596705Z","steps":["trace[163686689] 'process raft request'  (duration: 235.392824ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:46:25.596752Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.573371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-03T17:46:25.596808Z","caller":"traceutil/trace.go:172","msg":"trace[1285318172] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1255; }","duration":"197.667575ms","start":"2025-10-03T17:46:25.399133Z","end":"2025-10-03T17:46:25.596801Z","steps":["trace[1285318172] 'agreement among raft nodes before linearized reading'  (duration: 197.552827ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-03T17:46:32.649721Z","caller":"traceutil/trace.go:172","msg":"trace[1094692462] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"205.971884ms","start":"2025-10-03T17:46:32.443734Z","end":"2025-10-03T17:46:32.649706Z","steps":["trace[1094692462] 'process raft request'  (duration: 205.795529ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-03T17:46:37.975138Z","caller":"traceutil/trace.go:172","msg":"trace[1863754475] transaction","detail":"{read_only:false; response_revision:1310; number_of_response:1; }","duration":"144.056784ms","start":"2025-10-03T17:46:37.831060Z","end":"2025-10-03T17:46:37.975117Z","steps":["trace[1863754475] 'process raft request'  (duration: 143.952165ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:47:01.067296Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.698751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-03T17:47:01.067454Z","caller":"traceutil/trace.go:172","msg":"trace[398806799] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1459; }","duration":"140.909161ms","start":"2025-10-03T17:47:00.926529Z","end":"2025-10-03T17:47:01.067438Z","steps":["trace[398806799] 'range keys from in-memory index tree'  (duration: 139.371148ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:47:01.067665Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.875037ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:metrics-server\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-03T17:47:01.067699Z","caller":"traceutil/trace.go:172","msg":"trace[1316352099] range","detail":"{range_begin:/registry/clusterrolebindings/system:metrics-server; range_end:; response_count:0; response_revision:1459; }","duration":"135.911867ms","start":"2025-10-03T17:47:00.931774Z","end":"2025-10-03T17:47:01.067686Z","steps":["trace[1316352099] 'range keys from in-memory index tree'  (duration: 135.798326ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:47:01.067843Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.503361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-03T17:47:01.067861Z","caller":"traceutil/trace.go:172","msg":"trace[895663187] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1459; }","duration":"113.52239ms","start":"2025-10-03T17:47:00.954333Z","end":"2025-10-03T17:47:01.067856Z","steps":["trace[895663187] 'range keys from in-memory index tree'  (duration: 113.430564ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:47:01.068058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.713443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-10-03T17:47:01.068093Z","caller":"traceutil/trace.go:172","msg":"trace[983034576] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1459; }","duration":"105.750529ms","start":"2025-10-03T17:47:00.962337Z","end":"2025-10-03T17:47:01.068088Z","steps":["trace[983034576] 'range keys from in-memory index tree'  (duration: 105.577726ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-03T17:47:08.236196Z","caller":"traceutil/trace.go:172","msg":"trace[1970172222] transaction","detail":"{read_only:false; response_revision:1512; number_of_response:1; }","duration":"110.499887ms","start":"2025-10-03T17:47:08.125684Z","end":"2025-10-03T17:47:08.236184Z","steps":["trace[1970172222] 'process raft request'  (duration: 110.386073ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-03T17:47:33.404547Z","caller":"traceutil/trace.go:172","msg":"trace[494265224] transaction","detail":"{read_only:false; response_revision:1727; number_of_response:1; }","duration":"228.343522ms","start":"2025-10-03T17:47:33.176191Z","end":"2025-10-03T17:47:33.404535Z","steps":["trace[494265224] 'process raft request'  (duration: 228.240031ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-03T17:47:33.404974Z","caller":"traceutil/trace.go:172","msg":"trace[187141389] linearizableReadLoop","detail":"{readStateIndex:1798; appliedIndex:1798; }","duration":"202.122049ms","start":"2025-10-03T17:47:33.202242Z","end":"2025-10-03T17:47:33.404364Z","steps":["trace[187141389] 'read index received'  (duration: 202.115558ms)","trace[187141389] 'applied index is now lower than readState.Index'  (duration: 5.357µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-03T17:47:33.405078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.844196ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-03T17:47:33.405129Z","caller":"traceutil/trace.go:172","msg":"trace[268122743] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1727; }","duration":"202.914579ms","start":"2025-10-03T17:47:33.202207Z","end":"2025-10-03T17:47:33.405122Z","steps":["trace[268122743] 'agreement among raft nodes before linearized reading'  (duration: 202.823339ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-03T17:48:03.809728Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.645119ms","expected-duration":"100ms","prefix":"","request":"header:<ID:864015663986166422 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.143\" mod_revision:1810 > success:<request_put:<key:\"/registry/masterleases/192.168.39.143\" value_size:67 lease:864015663986166420 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.143\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-03T17:48:03.810047Z","caller":"traceutil/trace.go:172","msg":"trace[1660417619] transaction","detail":"{read_only:false; response_revision:1953; number_of_response:1; }","duration":"175.337059ms","start":"2025-10-03T17:48:03.634683Z","end":"2025-10-03T17:48:03.810020Z","steps":["trace[1660417619] 'process raft request'  (duration: 19.089214ms)","trace[1660417619] 'compare'  (duration: 154.97924ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:49:38 up 6 min,  0 users,  load average: 0.39, 1.09, 0.64
	Linux addons-925003 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [eb228ee59b10d57c340cc0dd3f48989cb5aa29fd246294c4459762194e06e8fb] <==
	E1003 17:45:01.923792       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.196.43:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.196.43:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.196.43:443: connect: connection refused" logger="UnhandledError"
	E1003 17:45:01.932194       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.196.43:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.196.43:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.196.43:443: connect: connection refused" logger="UnhandledError"
	I1003 17:45:02.033466       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1003 17:46:46.299896       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8443->192.168.39.1:39564: use of closed network connection
	E1003 17:46:46.510833       1 conn.go:339] Error on socket receive: read tcp 192.168.39.143:8443->192.168.39.1:39598: use of closed network connection
	I1003 17:46:55.742666       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.182.151"}
	I1003 17:47:02.939622       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1003 17:47:08.776826       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1003 17:47:08.977249       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.139.150"}
	E1003 17:47:34.978633       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1003 17:47:40.828107       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1003 17:47:55.530529       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1003 17:47:55.530795       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1003 17:47:55.583367       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1003 17:47:55.583406       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1003 17:47:55.610276       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1003 17:47:55.610393       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1003 17:47:55.645459       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1003 17:47:55.645560       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1003 17:47:55.669731       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1003 17:47:55.669887       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1003 17:47:56.646599       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1003 17:47:56.670096       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1003 17:47:56.704701       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I1003 17:49:36.568231       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.92.192"}
	
	
	==> kube-controller-manager [d96d60a1f8a0f595c84e0c10c3fbf7637deed34bf1a4300d8ec47a79feca2864] <==
	E1003 17:48:00.828087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:04.439844       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:04.441015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:04.789908       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:04.790855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:05.894774       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:05.895731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:12.740349       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:12.741397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:14.861279       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:14.862302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:14.871790       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:14.873413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:28.049170       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:28.050989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:33.123166       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:33.125367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:48:33.554262       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:48:33.555342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:49:00.233471       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:49:00.234630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:49:03.385910       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:49:03.387056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1003 17:49:17.803780       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1003 17:49:17.804999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [86530f71cc4817d6977b8512de0e341322df00d2b8fc24805e2e089ad6d1c2c7] <==
	I1003 17:44:00.872074       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 17:44:00.973749       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 17:44:00.973810       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.143"]
	E1003 17:44:00.973897       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 17:44:01.110328       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1003 17:44:01.110428       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1003 17:44:01.110584       1 server_linux.go:132] "Using iptables Proxier"
	I1003 17:44:01.141409       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 17:44:01.142079       1 server.go:527] "Version info" version="v1.34.1"
	I1003 17:44:01.142094       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 17:44:01.152213       1 config.go:200] "Starting service config controller"
	I1003 17:44:01.152227       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 17:44:01.152261       1 config.go:106] "Starting endpoint slice config controller"
	I1003 17:44:01.152265       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 17:44:01.152276       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 17:44:01.152280       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 17:44:01.159659       1 config.go:309] "Starting node config controller"
	I1003 17:44:01.159673       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 17:44:01.159679       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 17:44:01.252408       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 17:44:01.252417       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 17:44:01.252448       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f06905b9e5a21054268aad00c96765bce69aee31005fcb41e7f5eccd95091429] <==
	E1003 17:43:51.777451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 17:43:51.777494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 17:43:51.777579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 17:43:51.777596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 17:43:51.777681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 17:43:51.777764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 17:43:51.777807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 17:43:52.616762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 17:43:52.658268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 17:43:52.671705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 17:43:52.702707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 17:43:52.788996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 17:43:52.821192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 17:43:52.822866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1003 17:43:52.824026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 17:43:52.868250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1003 17:43:52.918806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 17:43:52.927073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 17:43:52.951589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 17:43:52.976060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 17:43:52.984820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 17:43:52.992007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 17:43:52.999524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 17:43:53.044172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1003 17:43:55.862140       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 17:47:58 addons-925003 kubelet[1517]: I1003 17:47:58.803121    1517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2caacfefd2432fe475efa24802397749913a8b37538de46ae586955b2d7e8d"} err="failed to get container status \"4d2caacfefd2432fe475efa24802397749913a8b37538de46ae586955b2d7e8d\": rpc error: code = NotFound desc = could not find container \"4d2caacfefd2432fe475efa24802397749913a8b37538de46ae586955b2d7e8d\": container with ID starting with 4d2caacfefd2432fe475efa24802397749913a8b37538de46ae586955b2d7e8d not found: ID does not exist"
	Oct 03 17:48:04 addons-925003 kubelet[1517]: E1003 17:48:04.774081    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513684773560635  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:04 addons-925003 kubelet[1517]: E1003 17:48:04.774105    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513684773560635  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:12 addons-925003 kubelet[1517]: I1003 17:48:12.506393    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-46299" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 17:48:14 addons-925003 kubelet[1517]: E1003 17:48:14.777194    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513694776619007  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:14 addons-925003 kubelet[1517]: E1003 17:48:14.777223    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513694776619007  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:24 addons-925003 kubelet[1517]: E1003 17:48:24.780247    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513704779784164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:24 addons-925003 kubelet[1517]: E1003 17:48:24.780271    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513704779784164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:34 addons-925003 kubelet[1517]: E1003 17:48:34.782970    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513714782467834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:34 addons-925003 kubelet[1517]: E1003 17:48:34.783013    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513714782467834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:44 addons-925003 kubelet[1517]: E1003 17:48:44.785846    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513724785286501  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:44 addons-925003 kubelet[1517]: E1003 17:48:44.785878    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513724785286501  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:54 addons-925003 kubelet[1517]: E1003 17:48:54.789397    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513734788884753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:54 addons-925003 kubelet[1517]: E1003 17:48:54.789533    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513734788884753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:48:58 addons-925003 kubelet[1517]: I1003 17:48:58.506311    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 17:49:04 addons-925003 kubelet[1517]: E1003 17:49:04.792292    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513744791764615  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:04 addons-925003 kubelet[1517]: E1003 17:49:04.792319    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513744791764615  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:14 addons-925003 kubelet[1517]: E1003 17:49:14.795504    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513754794896582  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:14 addons-925003 kubelet[1517]: E1003 17:49:14.795535    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513754794896582  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:24 addons-925003 kubelet[1517]: E1003 17:49:24.799508    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513764799098368  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:24 addons-925003 kubelet[1517]: E1003 17:49:24.799544    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513764799098368  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:30 addons-925003 kubelet[1517]: I1003 17:49:30.506686    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-46299" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 17:49:34 addons-925003 kubelet[1517]: E1003 17:49:34.803038    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759513774802543244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:34 addons-925003 kubelet[1517]: E1003 17:49:34.803064    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759513774802543244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 03 17:49:36 addons-925003 kubelet[1517]: I1003 17:49:36.632225    1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcgnl\" (UniqueName: \"kubernetes.io/projected/0946dda3-4cb0-4223-9281-bfd32deba03c-kube-api-access-vcgnl\") pod \"hello-world-app-5d498dc89-l2ncl\" (UID: \"0946dda3-4cb0-4223-9281-bfd32deba03c\") " pod="default/hello-world-app-5d498dc89-l2ncl"
	
	
	==> storage-provisioner [c590b00aa43a9f567215f5c9359338b7e236c3dbce6ac47c0ec613c354e87a09] <==
	W1003 17:49:14.019080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:16.024353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:16.032912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:18.036780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:18.043650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:20.049666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:20.057303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:22.060875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:22.068307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:24.073076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:24.081128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:26.085112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:26.090429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:28.094047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:28.100199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:30.103515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:30.110009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:32.112832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:32.118564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:34.123184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:34.128586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:36.132183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:36.140158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:38.154862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 17:49:38.164062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
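Note on the storage-provisioner warnings at the end of the log above: each repeated warning comes from a read of the core/v1 Endpoints API, which the server flags as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement read follows; the kubeconfig path and kube-system namespace are illustrative assumptions, not taken from the provisioner's source.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// discovery.k8s.io/v1 EndpointSlice is the replacement the warning names.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints))
	}
}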
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-925003 -n addons-925003
helpers_test.go:269: (dbg) Run:  kubectl --context addons-925003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-l2ncl ingress-nginx-admission-create-zq6zr ingress-nginx-admission-patch-72455
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-925003 describe pod hello-world-app-5d498dc89-l2ncl ingress-nginx-admission-create-zq6zr ingress-nginx-admission-patch-72455
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-925003 describe pod hello-world-app-5d498dc89-l2ncl ingress-nginx-admission-create-zq6zr ingress-nginx-admission-patch-72455: exit status 1 (93.84043ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-l2ncl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-925003/192.168.39.143
	Start Time:       Fri, 03 Oct 2025 17:49:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vcgnl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vcgnl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-l2ncl to addons-925003
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zq6zr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-72455" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-925003 describe pod hello-world-app-5d498dc89-l2ncl ingress-nginx-admission-create-zq6zr ingress-nginx-admission-patch-72455: exit status 1
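The two NotFound errors above are expected: the ingress-nginx admission create/patch pods are short-lived Job pods that had already been cleaned up by the time the post-mortem ran. For reference, a sketch roughly equivalent to the field-selector query helpers_test.go:269 issues via kubectl jsonpath (client-go based; the equivalence is an assumption, the helper shells out to kubectl):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// List pods in all namespaces whose phase is not Running, as the
	// post-mortem helper does before describing them.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}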
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable ingress-dns --alsologtostderr -v=1: (1.463248521s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable ingress --alsologtostderr -v=1: (7.779586288s)
--- FAIL: TestAddons/parallel/Ingress (159.83s)
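The step that actually failed in this run was the in-VM probe, which returned ssh exit status 28 after 2m16s; 28 matches curl's timeout exit code, so the nginx ingress never answered on 127.0.0.1 inside the VM. A standalone repro sketch under the same assumptions (profile name and binary path as in the log; the 10s per-attempt cap and retry interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the same probe the test runs until the controller answers
	// or the two-minute window the test allows has elapsed.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-925003",
			"ssh", "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("ingress answered:\n%s", out)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("no response from the ingress within 2m")
}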

                                                
                                    
TestPreload (160.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-442251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-442251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m33.885158114s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442251 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-442251 image pull gcr.io/k8s-minikube/busybox: (3.432650059s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-442251
E1003 18:35:01.130511   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-442251: (8.347562412s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-442251 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-442251 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (51.786291178s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442251 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-03 18:35:58.061088088 +0000 UTC m=+3202.652488811
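The assertion that failed is a substring check over `minikube image list`: after starting with --preload=false, pulling busybox, stopping, and restarting with the preload enabled, the pulled image should still be in the CRI-O store, but the output above contains only the preloaded images. A simplified sketch of that check (an assumption, not the literal body of preload_test.go:75):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// After the stop/start cycle, the image pulled with `image pull` should
	// still be listed; in the run above only the preloaded images survived.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-442251", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Printf("expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
		os.Exit(1)
	}
	fmt.Println("busybox present after restart")
}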
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-442251 -n test-preload-442251
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-442251 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-442251 logs -n 25: (1.14711406s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-137840 ssh -n multinode-137840-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:22 UTC │
	│ ssh     │ multinode-137840 ssh -n multinode-137840 sudo cat /home/docker/cp-test_multinode-137840-m03_multinode-137840.txt                                          │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:22 UTC │
	│ cp      │ multinode-137840 cp multinode-137840-m03:/home/docker/cp-test.txt multinode-137840-m02:/home/docker/cp-test_multinode-137840-m03_multinode-137840-m02.txt │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:22 UTC │
	│ ssh     │ multinode-137840 ssh -n multinode-137840-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:22 UTC │
	│ ssh     │ multinode-137840 ssh -n multinode-137840-m02 sudo cat /home/docker/cp-test_multinode-137840-m03_multinode-137840-m02.txt                                  │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:22 UTC │
	│ node    │ multinode-137840 node stop m03                                                                                                                            │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:22 UTC │
	│ node    │ multinode-137840 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:22 UTC │ 03 Oct 25 18:23 UTC │
	│ node    │ list -p multinode-137840                                                                                                                                  │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:23 UTC │                     │
	│ stop    │ -p multinode-137840                                                                                                                                       │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:23 UTC │ 03 Oct 25 18:26 UTC │
	│ start   │ -p multinode-137840 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:28 UTC │
	│ node    │ list -p multinode-137840                                                                                                                                  │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:28 UTC │                     │
	│ node    │ multinode-137840 node delete m03                                                                                                                          │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:28 UTC │ 03 Oct 25 18:28 UTC │
	│ stop    │ multinode-137840 stop                                                                                                                                     │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:28 UTC │ 03 Oct 25 18:31 UTC │
	│ start   │ -p multinode-137840 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:31 UTC │ 03 Oct 25 18:32 UTC │
	│ node    │ list -p multinode-137840                                                                                                                                  │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:32 UTC │                     │
	│ start   │ -p multinode-137840-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-137840-m02 │ jenkins │ v1.37.0 │ 03 Oct 25 18:32 UTC │                     │
	│ start   │ -p multinode-137840-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-137840-m03 │ jenkins │ v1.37.0 │ 03 Oct 25 18:32 UTC │ 03 Oct 25 18:33 UTC │
	│ node    │ add -p multinode-137840                                                                                                                                   │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:33 UTC │                     │
	│ delete  │ -p multinode-137840-m03                                                                                                                                   │ multinode-137840-m03 │ jenkins │ v1.37.0 │ 03 Oct 25 18:33 UTC │ 03 Oct 25 18:33 UTC │
	│ delete  │ -p multinode-137840                                                                                                                                       │ multinode-137840     │ jenkins │ v1.37.0 │ 03 Oct 25 18:33 UTC │ 03 Oct 25 18:33 UTC │
	│ start   │ -p test-preload-442251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-442251  │ jenkins │ v1.37.0 │ 03 Oct 25 18:33 UTC │ 03 Oct 25 18:34 UTC │
	│ image   │ test-preload-442251 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-442251  │ jenkins │ v1.37.0 │ 03 Oct 25 18:34 UTC │ 03 Oct 25 18:34 UTC │
	│ stop    │ -p test-preload-442251                                                                                                                                    │ test-preload-442251  │ jenkins │ v1.37.0 │ 03 Oct 25 18:34 UTC │ 03 Oct 25 18:35 UTC │
	│ start   │ -p test-preload-442251 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-442251  │ jenkins │ v1.37.0 │ 03 Oct 25 18:35 UTC │ 03 Oct 25 18:35 UTC │
	│ image   │ test-preload-442251 image list                                                                                                                            │ test-preload-442251  │ jenkins │ v1.37.0 │ 03 Oct 25 18:35 UTC │ 03 Oct 25 18:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:35:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:35:06.125252   35522 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:35:06.125481   35522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:35:06.125489   35522 out.go:374] Setting ErrFile to fd 2...
	I1003 18:35:06.125493   35522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:35:06.125671   35522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 18:35:06.126120   35522 out.go:368] Setting JSON to false
	I1003 18:35:06.126983   35522 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4650,"bootTime":1759511856,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:35:06.127075   35522 start.go:140] virtualization: kvm guest
	I1003 18:35:06.129355   35522 out.go:179] * [test-preload-442251] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:35:06.131057   35522 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:35:06.131049   35522 notify.go:220] Checking for updates...
	I1003 18:35:06.134356   35522 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:35:06.136040   35522 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 18:35:06.137544   35522 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 18:35:06.139248   35522 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:35:06.144399   35522 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:35:06.146116   35522 config.go:182] Loaded profile config "test-preload-442251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1003 18:35:06.147998   35522 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1003 18:35:06.149319   35522 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:35:06.184339   35522 out.go:179] * Using the kvm2 driver based on existing profile
	I1003 18:35:06.185824   35522 start.go:304] selected driver: kvm2
	I1003 18:35:06.185842   35522 start.go:924] validating driver "kvm2" against &{Name:test-preload-442251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:35:06.185972   35522 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:35:06.186872   35522 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:35:06.186908   35522 cni.go:84] Creating CNI manager for ""
	I1003 18:35:06.186951   35522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 18:35:06.187011   35522 start.go:348] cluster config:
	{Name:test-preload-442251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:35:06.187099   35522 iso.go:125] acquiring lock: {Name:mk4ce219bd5cf5058f69eb8b10ebc9d907f5f7b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:35:06.188855   35522 out.go:179] * Starting "test-preload-442251" primary control-plane node in "test-preload-442251" cluster
	I1003 18:35:06.190386   35522 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1003 18:35:06.287219   35522 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1003 18:35:06.287246   35522 cache.go:58] Caching tarball of preloaded images
	I1003 18:35:06.287435   35522 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1003 18:35:06.289655   35522 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1003 18:35:06.291065   35522 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1003 18:35:06.389396   35522 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1003 18:35:06.389451   35522 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1003 18:35:16.195876   35522 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
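
For reference: the preload download above fetches the tarball with a "?checksum=md5:..." suffix and verifies it against the digest returned by the GCS API. A minimal Go sketch of that verification step (generic, not minikube's actual download.go):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // md5Matches streams a downloaded file through MD5 and compares the
    // hex digest against the expected checksum (here, the one from the log).
    func md5Matches(path, want string) (bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return false, err
        }
        return hex.EncodeToString(h.Sum(nil)) == want, nil
    }

    func main() {
        ok, err := md5Matches("preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4",
            "2acdb4dde52794f2167c79dcee7507ae")
        fmt.Println(ok, err)
    }
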
	I1003 18:35:16.196056   35522 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/config.json ...
	I1003 18:35:16.196994   35522 start.go:360] acquireMachinesLock for test-preload-442251: {Name:mk6fc4b452aa995b01198c8d80bd9bad940152be Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 18:35:16.197069   35522 start.go:364] duration metric: took 49.16µs to acquireMachinesLock for "test-preload-442251"
	I1003 18:35:16.197091   35522 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:35:16.197099   35522 fix.go:54] fixHost starting: 
	I1003 18:35:16.199190   35522 fix.go:112] recreateIfNeeded on test-preload-442251: state=Stopped err=<nil>
	W1003 18:35:16.199215   35522 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:35:16.200735   35522 out.go:252] * Restarting existing kvm2 VM for "test-preload-442251" ...
	I1003 18:35:16.200798   35522 main.go:141] libmachine: starting domain...
	I1003 18:35:16.200815   35522 main.go:141] libmachine: ensuring networks are active...
	I1003 18:35:16.201537   35522 main.go:141] libmachine: Ensuring network default is active
	I1003 18:35:16.201911   35522 main.go:141] libmachine: Ensuring network mk-test-preload-442251 is active
	I1003 18:35:16.202340   35522 main.go:141] libmachine: getting domain XML...
	I1003 18:35:16.203604   35522 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-442251</name>
	  <uuid>1e1efe20-120d-45fa-848b-ef40a6cc8d1f</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/test-preload-442251.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:01:50:88'/>
	      <source network='mk-test-preload-442251'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:2c:ba:af'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
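
For reference: the XML above is handed to libvirt to recreate the stopped VM. A minimal sketch of the equivalent define-and-start sequence with the libvirt Go bindings (the package path and input file name are assumptions; minikube's kvm2 driver performs these steps internally):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as in the log
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        xml, err := os.ReadFile("test-preload-442251.xml") // the domain XML printed above
        if err != nil {
            log.Fatal(err)
        }
        dom, err := conn.DomainDefineXML(string(xml)) // (re)define the persistent domain
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it ("starting domain...")
            log.Fatal(err)
        }
    }
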
	
	I1003 18:35:17.483117   35522 main.go:141] libmachine: waiting for domain to start...
	I1003 18:35:17.484388   35522 main.go:141] libmachine: domain is now running
	I1003 18:35:17.484403   35522 main.go:141] libmachine: waiting for IP...
	I1003 18:35:17.485634   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:17.486865   35522 main.go:141] libmachine: domain test-preload-442251 has current primary IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:17.486880   35522 main.go:141] libmachine: found domain IP: 192.168.39.229
	I1003 18:35:17.486886   35522 main.go:141] libmachine: reserving static IP address...
	I1003 18:35:17.487365   35522 main.go:141] libmachine: found host DHCP lease matching {name: "test-preload-442251", mac: "52:54:00:01:50:88", ip: "192.168.39.229"} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:33:35 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:17.487390   35522 main.go:141] libmachine: skip adding static IP to network mk-test-preload-442251 - found existing host DHCP lease matching {name: "test-preload-442251", mac: "52:54:00:01:50:88", ip: "192.168.39.229"}
	I1003 18:35:17.487400   35522 main.go:141] libmachine: reserved static IP address 192.168.39.229 for domain test-preload-442251
	I1003 18:35:17.487405   35522 main.go:141] libmachine: waiting for SSH...
	I1003 18:35:17.487410   35522 main.go:141] libmachine: Getting to WaitForSSH function...
	I1003 18:35:17.490027   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:17.490474   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:33:35 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:17.490497   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:17.490710   35522 main.go:141] libmachine: Using SSH client type: native
	I1003 18:35:17.490978   35522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I1003 18:35:17.490991   35522 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1003 18:35:20.552179   35522 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I1003 18:35:26.632122   35522 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I1003 18:35:29.748754   35522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
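
For reference: the two "no route to host" dials above are the expected retry loop while the guest boots; the driver simply polls TCP port 22 until a connection succeeds. A minimal sketch of such a wait loop:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the guest's SSH port until it accepts a TCP
    // connection or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(2 * time.Second) // e.g. "connect: no route to host" while booting
        }
        return fmt.Errorf("ssh not reachable on %s within %s", addr, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("192.168.39.229:22", 2*time.Minute))
    }
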
	I1003 18:35:29.752511   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:29.753175   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:29.753205   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:29.753499   35522 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/config.json ...
	I1003 18:35:29.753741   35522 machine.go:93] provisionDockerMachine start ...
	I1003 18:35:29.756696   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:29.757233   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:29.757260   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:29.757428   35522 main.go:141] libmachine: Using SSH client type: native
	I1003 18:35:29.757678   35522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I1003 18:35:29.757692   35522 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:35:29.869808   35522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 18:35:29.869849   35522 buildroot.go:166] provisioning hostname "test-preload-442251"
	I1003 18:35:29.872761   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:29.873189   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:29.873218   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:29.873426   35522 main.go:141] libmachine: Using SSH client type: native
	I1003 18:35:29.873681   35522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I1003 18:35:29.873699   35522 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-442251 && echo "test-preload-442251" | sudo tee /etc/hostname
	I1003 18:35:30.011926   35522 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-442251
	
	I1003 18:35:30.015076   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.015627   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.015675   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.015902   35522 main.go:141] libmachine: Using SSH client type: native
	I1003 18:35:30.016183   35522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I1003 18:35:30.016209   35522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-442251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-442251/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-442251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:35:30.163983   35522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:35:30.164045   35522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21625-8656/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-8656/.minikube}
	I1003 18:35:30.164089   35522 buildroot.go:174] setting up certificates
	I1003 18:35:30.164100   35522 provision.go:84] configureAuth start
	I1003 18:35:30.167694   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.168078   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.168099   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.170344   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.170721   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.170745   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.170883   35522 provision.go:143] copyHostCerts
	I1003 18:35:30.170932   35522 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8656/.minikube/ca.pem, removing ...
	I1003 18:35:30.170950   35522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8656/.minikube/ca.pem
	I1003 18:35:30.171020   35522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-8656/.minikube/ca.pem (1078 bytes)
	I1003 18:35:30.171110   35522 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8656/.minikube/cert.pem, removing ...
	I1003 18:35:30.171117   35522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8656/.minikube/cert.pem
	I1003 18:35:30.171142   35522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-8656/.minikube/cert.pem (1123 bytes)
	I1003 18:35:30.171209   35522 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-8656/.minikube/key.pem, removing ...
	I1003 18:35:30.171217   35522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-8656/.minikube/key.pem
	I1003 18:35:30.171241   35522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-8656/.minikube/key.pem (1679 bytes)
	I1003 18:35:30.171290   35522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-8656/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca-key.pem org=jenkins.test-preload-442251 san=[127.0.0.1 192.168.39.229 localhost minikube test-preload-442251]
	I1003 18:35:30.261659   35522 provision.go:177] copyRemoteCerts
	I1003 18:35:30.261719   35522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:35:30.264486   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.264927   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.264948   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.265099   35522 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/id_rsa Username:docker}
	I1003 18:35:30.352174   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 18:35:30.382858   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1003 18:35:30.414573   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 18:35:30.447316   35522 provision.go:87] duration metric: took 283.198605ms to configureAuth
	I1003 18:35:30.447360   35522 buildroot.go:189] setting minikube options for container-runtime
	I1003 18:35:30.447557   35522 config.go:182] Loaded profile config "test-preload-442251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1003 18:35:30.451251   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.451797   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.451832   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.452092   35522 main.go:141] libmachine: Using SSH client type: native
	I1003 18:35:30.452280   35522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I1003 18:35:30.452294   35522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:35:30.708569   35522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:35:30.708604   35522 machine.go:96] duration metric: took 954.847144ms to provisionDockerMachine
	I1003 18:35:30.708624   35522 start.go:293] postStartSetup for "test-preload-442251" (driver="kvm2")
	I1003 18:35:30.708636   35522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:35:30.708696   35522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:35:30.712104   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.712536   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.712564   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.712754   35522 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/id_rsa Username:docker}
	I1003 18:35:30.802234   35522 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:35:30.807467   35522 info.go:137] Remote host: Buildroot 2025.02
	I1003 18:35:30.807499   35522 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8656/.minikube/addons for local assets ...
	I1003 18:35:30.807593   35522 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-8656/.minikube/files for local assets ...
	I1003 18:35:30.807721   35522 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-8656/.minikube/files/etc/ssl/certs/125642.pem -> 125642.pem in /etc/ssl/certs
	I1003 18:35:30.807876   35522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 18:35:30.823070   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/files/etc/ssl/certs/125642.pem --> /etc/ssl/certs/125642.pem (1708 bytes)
	I1003 18:35:30.866218   35522 start.go:296] duration metric: took 157.579556ms for postStartSetup
	I1003 18:35:30.866259   35522 fix.go:56] duration metric: took 14.669161374s for fixHost
	I1003 18:35:30.869313   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.869768   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.869816   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.869999   35522 main.go:141] libmachine: Using SSH client type: native
	I1003 18:35:30.870261   35522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I1003 18:35:30.870278   35522 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 18:35:30.982601   35522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759516530.936421000
	
	I1003 18:35:30.982629   35522 fix.go:216] guest clock: 1759516530.936421000
	I1003 18:35:30.982639   35522 fix.go:229] Guest: 2025-10-03 18:35:30.936421 +0000 UTC Remote: 2025-10-03 18:35:30.866262184 +0000 UTC m=+24.789746003 (delta=70.158816ms)
	I1003 18:35:30.982664   35522 fix.go:200] guest clock delta is within tolerance: 70.158816ms
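
For reference: the guest-clock check above runs "date +%s.%N" in the VM, parses the epoch seconds, and compares them against the host clock; a delta under the tolerance means no resync is needed. A minimal sketch of that comparison (float parsing costs sub-microsecond precision, which is fine at this tolerance):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns its
    // offset from the supplied host time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return host.Sub(guest), nil
    }

    func main() {
        delta, err := clockDelta("1759516530.936421000", time.Now())
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second
        fmt.Println(delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
    }
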
	I1003 18:35:30.982671   35522 start.go:83] releasing machines lock for "test-preload-442251", held for 14.785590149s
	I1003 18:35:30.986285   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.986711   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.986737   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.987425   35522 ssh_runner.go:195] Run: cat /version.json
	I1003 18:35:30.987528   35522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:35:30.990470   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.990653   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.990911   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.990941   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.991111   35522 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/id_rsa Username:docker}
	I1003 18:35:30.991267   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:30.991299   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:30.991453   35522 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/id_rsa Username:docker}
	I1003 18:35:31.104596   35522 ssh_runner.go:195] Run: systemctl --version
	I1003 18:35:31.112504   35522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:35:31.260406   35522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:35:31.268082   35522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:35:31.268159   35522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:35:31.288576   35522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 18:35:31.288616   35522 start.go:495] detecting cgroup driver to use...
	I1003 18:35:31.288680   35522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:35:31.308145   35522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:35:31.325054   35522 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:35:31.325108   35522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:35:31.342641   35522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:35:31.359289   35522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:35:31.507258   35522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:35:31.732217   35522 docker.go:234] disabling docker service ...
	I1003 18:35:31.732282   35522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:35:31.750251   35522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:35:31.766664   35522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:35:31.926610   35522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:35:32.080765   35522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:35:32.097533   35522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:35:32.121588   35522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1003 18:35:32.121667   35522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:35:32.135127   35522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 18:35:32.135222   35522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:35:32.148844   35522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:35:32.161860   35522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:35:32.175206   35522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:35:32.189333   35522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:35:32.202247   35522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:35:32.224260   35522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
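
For reference: the sed edits above converge on a CRI-O drop-in roughly like the following. This is a hedged reconstruction of /etc/crio/crio.conf.d/02-crio.conf after the edits (only the touched keys are shown; section placement follows CRI-O's documented layout and is not visible in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
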
	I1003 18:35:32.237182   35522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:35:32.248973   35522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1003 18:35:32.249041   35522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1003 18:35:32.270513   35522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:35:32.283552   35522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:35:32.429019   35522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:35:32.551550   35522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:35:32.551630   35522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:35:32.557701   35522 start.go:563] Will wait 60s for crictl version
	I1003 18:35:32.557771   35522 ssh_runner.go:195] Run: which crictl
	I1003 18:35:32.562194   35522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 18:35:32.605866   35522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1003 18:35:32.605952   35522 ssh_runner.go:195] Run: crio --version
	I1003 18:35:32.636421   35522 ssh_runner.go:195] Run: crio --version
	I1003 18:35:32.670763   35522 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1003 18:35:32.675549   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:32.676029   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:32.676053   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:32.676327   35522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1003 18:35:32.681106   35522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:35:32.697177   35522 kubeadm.go:883] updating cluster {Name:test-preload-442251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:35:32.697296   35522 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1003 18:35:32.697351   35522 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:35:32.739199   35522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1003 18:35:32.739295   35522 ssh_runner.go:195] Run: which lz4
	I1003 18:35:32.744488   35522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 18:35:32.749872   35522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 18:35:32.749922   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1003 18:35:34.257866   35522 crio.go:462] duration metric: took 1.513410147s to copy over tarball
	I1003 18:35:34.257952   35522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 18:35:36.009553   35522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.751574024s)
	I1003 18:35:36.009586   35522 crio.go:469] duration metric: took 1.751674664s to extract the tarball
	I1003 18:35:36.009593   35522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 18:35:36.051177   35522 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:35:36.097263   35522 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:35:36.097291   35522 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:35:36.097298   35522 kubeadm.go:934] updating node { 192.168.39.229 8443 v1.32.0 crio true true} ...
	I1003 18:35:36.097396   35522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-442251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-442251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:35:36.097458   35522 ssh_runner.go:195] Run: crio config
	I1003 18:35:36.145365   35522 cni.go:84] Creating CNI manager for ""
	I1003 18:35:36.145396   35522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 18:35:36.145415   35522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:35:36.145435   35522 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-442251 NodeName:test-preload-442251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:35:36.145560   35522 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-442251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.229"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
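The kubeadm, kubelet, and kube-proxy documents above are rendered from the kubeadm options struct logged at 18:35:36.145435. A minimal sketch of that templating step in Go; the struct and template here are hypothetical stand-ins for minikube's real bootstrapper code, with values mirroring this run:

package main

import (
	"os"
	"text/template"
)

// initOpts is an illustrative options struct covering the fields that the
// InitConfiguration fragment above actually uses.
type initOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	// Values taken from the kubeadm options logged for this run.
	opts := initOpts{
		AdvertiseAddress: "192.168.39.229",
		BindPort:         8443,
		NodeName:         "test-preload-442251",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeIP:           "192.168.39.229",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}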
	
	I1003 18:35:36.145653   35522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1003 18:35:36.158961   35522 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:35:36.159047   35522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:35:36.171838   35522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1003 18:35:36.193852   35522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:35:36.215235   35522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1003 18:35:36.238063   35522 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I1003 18:35:36.242489   35522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
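The one-liner above makes the control-plane /etc/hosts entry idempotent: it filters out any line already ending in the hostname, appends a fresh IP-to-hostname mapping, and copies the staged file back with sudo. A sketch of the same logic in native Go (writing in place for brevity, where the logged command stages through /tmp):

package main

import (
	"os"
	"strings"
)

// upsertHost mirrors the shell one-liner above: drop any existing line that
// already maps the hostname (the grep -v step), then append a fresh
// "IP<TAB>hostname" entry.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.229", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}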
	I1003 18:35:36.258368   35522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:35:36.407206   35522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:35:36.428043   35522 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251 for IP: 192.168.39.229
	I1003 18:35:36.428070   35522 certs.go:195] generating shared ca certs ...
	I1003 18:35:36.428086   35522 certs.go:227] acquiring lock for ca certs: {Name:mk4284b70d600b181ba346e84ac85f956eee3efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:35:36.428252   35522 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-8656/.minikube/ca.key
	I1003 18:35:36.428298   35522 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.key
	I1003 18:35:36.428308   35522 certs.go:257] generating profile certs ...
	I1003 18:35:36.428379   35522 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.key
	I1003 18:35:36.428438   35522 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/apiserver.key.53b197cf
	I1003 18:35:36.428480   35522 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/proxy-client.key
	I1003 18:35:36.428613   35522 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/12564.pem (1338 bytes)
	W1003 18:35:36.428642   35522 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-8656/.minikube/certs/12564_empty.pem, impossibly tiny 0 bytes
	I1003 18:35:36.428648   35522 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:35:36.428668   35522 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/ca.pem (1078 bytes)
	I1003 18:35:36.428688   35522 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:35:36.428707   35522 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/certs/key.pem (1679 bytes)
	I1003 18:35:36.428745   35522 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-8656/.minikube/files/etc/ssl/certs/125642.pem (1708 bytes)
	I1003 18:35:36.429316   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:35:36.476959   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1003 18:35:36.518194   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:35:36.553585   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:35:36.585724   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1003 18:35:36.617957   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:35:36.649492   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:35:36.681217   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 18:35:36.712737   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/certs/12564.pem --> /usr/share/ca-certificates/12564.pem (1338 bytes)
	I1003 18:35:36.745484   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/files/etc/ssl/certs/125642.pem --> /usr/share/ca-certificates/125642.pem (1708 bytes)
	I1003 18:35:36.778895   35522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:35:36.810765   35522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:35:36.834857   35522 ssh_runner.go:195] Run: openssl version
	I1003 18:35:36.842355   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:35:36.856942   35522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:35:36.862908   35522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:35:36.862997   35522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:35:36.870799   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:35:36.885438   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12564.pem && ln -fs /usr/share/ca-certificates/12564.pem /etc/ssl/certs/12564.pem"
	I1003 18:35:36.899087   35522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12564.pem
	I1003 18:35:36.904827   35522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 17:52 /usr/share/ca-certificates/12564.pem
	I1003 18:35:36.904901   35522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12564.pem
	I1003 18:35:36.912743   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12564.pem /etc/ssl/certs/51391683.0"
	I1003 18:35:36.927132   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125642.pem && ln -fs /usr/share/ca-certificates/125642.pem /etc/ssl/certs/125642.pem"
	I1003 18:35:36.941960   35522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125642.pem
	I1003 18:35:36.947743   35522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 17:52 /usr/share/ca-certificates/125642.pem
	I1003 18:35:36.947819   35522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125642.pem
	I1003 18:35:36.955613   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125642.pem /etc/ssl/certs/3ec20f2e.0"
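Each hash-and-link pair above implements OpenSSL's CA lookup convention: a trusted certificate is found through a symlink named after its subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0 here). A sketch of that step in Go, shelling out to openssl rather than reimplementing the subject-hash computation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the <hash>.0 symlink that OpenSSL's lookup expects.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}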
	I1003 18:35:36.969954   35522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:35:36.975983   35522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:35:36.984046   35522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:35:36.991882   35522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:35:37.000369   35522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:35:37.008577   35522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:35:37.016495   35522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
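The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours; a non-zero exit would force regeneration. A Go equivalent using crypto/x509, with the path taken from the first check above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}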
	I1003 18:35:37.024089   35522 kubeadm.go:400] StartCluster: {Name:test-preload-442251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-442251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:35:37.024200   35522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:35:37.024259   35522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:35:37.066553   35522 cri.go:89] found id: ""
	I1003 18:35:37.066623   35522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:35:37.079611   35522 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:35:37.079651   35522 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:35:37.079710   35522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:35:37.092398   35522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:35:37.092829   35522 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-442251" does not appear in /home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 18:35:37.092951   35522 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-8656/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-442251" cluster setting kubeconfig missing "test-preload-442251" context setting]
	I1003 18:35:37.093200   35522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/kubeconfig: {Name:mk3bf5476cb0b0966e4582f99de822e34e150667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:35:37.093730   35522 kapi.go:59] client config for test-preload-442251: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:35:37.094132   35522 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 18:35:37.094150   35522 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 18:35:37.094157   35522 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 18:35:37.094163   35522 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 18:35:37.094170   35522 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 18:35:37.094445   35522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:35:37.107740   35522 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.229
	I1003 18:35:37.107777   35522 kubeadm.go:1160] stopping kube-system containers ...
	I1003 18:35:37.107800   35522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1003 18:35:37.107866   35522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:35:37.150117   35522 cri.go:89] found id: ""
	I1003 18:35:37.150206   35522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 18:35:37.174773   35522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:35:37.186978   35522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:35:37.186999   35522 kubeadm.go:157] found existing configuration files:
	
	I1003 18:35:37.187046   35522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:35:37.198981   35522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:35:37.199059   35522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:35:37.211930   35522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:35:37.223432   35522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:35:37.223515   35522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:35:37.235657   35522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:35:37.246813   35522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:35:37.246882   35522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:35:37.259205   35522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:35:37.271181   35522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:35:37.271252   35522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:35:37.283944   35522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:35:37.297195   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:35:37.358665   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:35:38.264156   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:35:38.519507   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:35:38.590768   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:35:38.673876   35522 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:35:38.673963   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:39.174902   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:39.674102   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:40.174167   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:40.674835   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:41.174689   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:41.203999   35522 api_server.go:72] duration metric: took 2.530143467s to wait for apiserver process to appear ...
	I1003 18:35:41.204033   35522 api_server.go:88] waiting for apiserver healthz status ...
	I1003 18:35:41.204051   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:41.204535   35522 api_server.go:269] stopped: https://192.168.39.229:8443/healthz: Get "https://192.168.39.229:8443/healthz": dial tcp 192.168.39.229:8443: connect: connection refused
	I1003 18:35:41.704227   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:43.836682   35522 api_server.go:279] https://192.168.39.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 18:35:43.836708   35522 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 18:35:43.836722   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:43.884983   35522 api_server.go:279] https://192.168.39.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 18:35:43.885010   35522 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 18:35:44.204548   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:44.209409   35522 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 18:35:44.209437   35522 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 18:35:44.705012   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:44.714950   35522 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 18:35:44.714981   35522 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 18:35:45.204750   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:45.215590   35522 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I1003 18:35:45.224550   35522 api_server.go:141] control plane version: v1.32.0
	I1003 18:35:45.224580   35522 api_server.go:131] duration metric: took 4.020539709s to wait for apiserver health ...
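The wait above tolerates three expected failure modes in order: connection refused while the apiserver is still binding, 403 while the RBAC bootstrap roles that authorize anonymous /healthz access are still being seeded, and 500 while the rbac/bootstrap-roles and scheduling poststart hooks report failure. A sketch of such a polling loop in Go; InsecureSkipVerify keeps the sketch short, where a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz every 500ms until it returns 200, retrying
// through connection errors, 403s, and 500s as seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.229:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}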
	I1003 18:35:45.224589   35522 cni.go:84] Creating CNI manager for ""
	I1003 18:35:45.224595   35522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 18:35:45.226966   35522 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 18:35:45.228699   35522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 18:35:45.256079   35522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
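The conflist written above wires pods into the 10.244.0.0/16 CIDR chosen earlier; its 496 bytes are not dumped in the log. For illustration only, a minimal bridge-plugin configuration in the standard CNI format that a Go helper might write (the field values are assumptions, not minikube's literal file):

package main

import "os"

// conflist is an illustrative minimal bridge CNI config; the log confirms
// only the destination path and size, not the contents.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}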
	I1003 18:35:45.294371   35522 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 18:35:45.298988   35522 system_pods.go:59] 7 kube-system pods found
	I1003 18:35:45.299033   35522 system_pods.go:61] "coredns-668d6bf9bc-d4ks7" [e2b0c6ed-3e09-4914-ab49-ee58fc82324e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:35:45.299043   35522 system_pods.go:61] "etcd-test-preload-442251" [41d7b57b-e1ff-4923-b27a-baae7e8438ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 18:35:45.299056   35522 system_pods.go:61] "kube-apiserver-test-preload-442251" [edcf0d40-ea5e-4143-8c35-7edfa63478a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 18:35:45.299067   35522 system_pods.go:61] "kube-controller-manager-test-preload-442251" [5ada30cd-3b7b-402f-89b2-7f2618065bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 18:35:45.299078   35522 system_pods.go:61] "kube-proxy-x9wrz" [8c60595a-679c-439d-bcb9-3b302dadf3d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1003 18:35:45.299087   35522 system_pods.go:61] "kube-scheduler-test-preload-442251" [8e109cee-8522-4fd7-bac9-09e674fd4d7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 18:35:45.299095   35522 system_pods.go:61] "storage-provisioner" [01343337-d587-4601-bbe7-064a87b89671] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:35:45.299103   35522 system_pods.go:74] duration metric: took 4.701162ms to wait for pod list to return data ...
	I1003 18:35:45.299118   35522 node_conditions.go:102] verifying NodePressure condition ...
	I1003 18:35:45.303333   35522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1003 18:35:45.303368   35522 node_conditions.go:123] node cpu capacity is 2
	I1003 18:35:45.303382   35522 node_conditions.go:105] duration metric: took 4.257856ms to run NodePressure ...
	I1003 18:35:45.303433   35522 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:35:45.590106   35522 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1003 18:35:45.594686   35522 kubeadm.go:743] kubelet initialised
	I1003 18:35:45.594717   35522 kubeadm.go:744] duration metric: took 4.579382ms waiting for restarted kubelet to initialise ...
	I1003 18:35:45.594737   35522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 18:35:45.611929   35522 ops.go:34] apiserver oom_adj: -16
	I1003 18:35:45.611963   35522 kubeadm.go:601] duration metric: took 8.532302059s to restartPrimaryControlPlane
	I1003 18:35:45.611975   35522 kubeadm.go:402] duration metric: took 8.587894584s to StartCluster
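Reading /proc/<pid>/oom_adj above confirms the kubelet started the apiserver with a strongly negative OOM adjustment (-16), so the kernel avoids killing it under memory pressure. The probe is small enough to sketch directly in Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver process, as `pgrep -n` does.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	// Read its OOM adjustment; -16 biases the kernel against killing it.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}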
	I1003 18:35:45.611992   35522 settings.go:142] acquiring lock: {Name:mke9d2b3efcaa2fe43ef0f2a287704ef18b85ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:35:45.612088   35522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 18:35:45.612891   35522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-8656/kubeconfig: {Name:mk3bf5476cb0b0966e4582f99de822e34e150667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:35:45.613179   35522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:35:45.613276   35522 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:35:45.613363   35522 addons.go:69] Setting storage-provisioner=true in profile "test-preload-442251"
	I1003 18:35:45.613374   35522 config.go:182] Loaded profile config "test-preload-442251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1003 18:35:45.613382   35522 addons.go:238] Setting addon storage-provisioner=true in "test-preload-442251"
	W1003 18:35:45.613390   35522 addons.go:247] addon storage-provisioner should already be in state true
	I1003 18:35:45.613417   35522 host.go:66] Checking if "test-preload-442251" exists ...
	I1003 18:35:45.613414   35522 addons.go:69] Setting default-storageclass=true in profile "test-preload-442251"
	I1003 18:35:45.613448   35522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-442251"
	I1003 18:35:45.615072   35522 out.go:179] * Verifying Kubernetes components...
	I1003 18:35:45.616456   35522 kapi.go:59] client config for test-preload-442251: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:35:45.616808   35522 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:35:45.616853   35522 addons.go:238] Setting addon default-storageclass=true in "test-preload-442251"
	W1003 18:35:45.616869   35522 addons.go:247] addon default-storageclass should already be in state true
	I1003 18:35:45.616883   35522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:35:45.616898   35522 host.go:66] Checking if "test-preload-442251" exists ...
	I1003 18:35:45.618251   35522 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:35:45.618268   35522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:35:45.618854   35522 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:35:45.618870   35522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:35:45.622837   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:45.622926   35522 main.go:141] libmachine: domain test-preload-442251 has defined MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:45.623472   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:45.623505   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:45.623653   35522 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:50:88", ip: ""} in network mk-test-preload-442251: {Iface:virbr1 ExpiryTime:2025-10-03 19:35:27 +0000 UTC Type:0 Mac:52:54:00:01:50:88 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:test-preload-442251 Clientid:01:52:54:00:01:50:88}
	I1003 18:35:45.623688   35522 main.go:141] libmachine: domain test-preload-442251 has defined IP address 192.168.39.229 and MAC address 52:54:00:01:50:88 in network mk-test-preload-442251
	I1003 18:35:45.623751   35522 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/id_rsa Username:docker}
	I1003 18:35:45.624246   35522 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/test-preload-442251/id_rsa Username:docker}
	I1003 18:35:45.872642   35522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:35:45.915023   35522 node_ready.go:35] waiting up to 6m0s for node "test-preload-442251" to be "Ready" ...
	I1003 18:35:46.198559   35522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:35:46.206541   35522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:35:46.884859   35522 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1003 18:35:46.886643   35522 addons.go:514] duration metric: took 1.273370093s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1003 18:35:47.918873   35522 node_ready.go:57] node "test-preload-442251" has "Ready":"False" status (will retry)
	W1003 18:35:49.919043   35522 node_ready.go:57] node "test-preload-442251" has "Ready":"False" status (will retry)
	W1003 18:35:52.419543   35522 node_ready.go:57] node "test-preload-442251" has "Ready":"False" status (will retry)
	I1003 18:35:54.919507   35522 node_ready.go:49] node "test-preload-442251" is "Ready"
	I1003 18:35:54.919614   35522 node_ready.go:38] duration metric: took 9.004541242s for node "test-preload-442251" to be "Ready" ...
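The node wait above polls the node object until its Ready condition turns True (four attempts over about nine seconds here). A sketch of that check with client-go, using the kubeconfig path and node name from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21625-8656/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-442251", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("node never became Ready")
}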
	I1003 18:35:54.919638   35522 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:35:54.919699   35522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:35:54.942945   35522 api_server.go:72] duration metric: took 9.329728237s to wait for apiserver process to appear ...
	I1003 18:35:54.942975   35522 api_server.go:88] waiting for apiserver healthz status ...
	I1003 18:35:54.942998   35522 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I1003 18:35:54.948775   35522 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I1003 18:35:54.949967   35522 api_server.go:141] control plane version: v1.32.0
	I1003 18:35:54.949991   35522 api_server.go:131] duration metric: took 7.008866ms to wait for apiserver health ...
	I1003 18:35:54.950000   35522 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 18:35:54.953889   35522 system_pods.go:59] 7 kube-system pods found
	I1003 18:35:54.953914   35522 system_pods.go:61] "coredns-668d6bf9bc-d4ks7" [e2b0c6ed-3e09-4914-ab49-ee58fc82324e] Running
	I1003 18:35:54.953919   35522 system_pods.go:61] "etcd-test-preload-442251" [41d7b57b-e1ff-4923-b27a-baae7e8438ba] Running
	I1003 18:35:54.953928   35522 system_pods.go:61] "kube-apiserver-test-preload-442251" [edcf0d40-ea5e-4143-8c35-7edfa63478a1] Running
	I1003 18:35:54.953932   35522 system_pods.go:61] "kube-controller-manager-test-preload-442251" [5ada30cd-3b7b-402f-89b2-7f2618065bc2] Running
	I1003 18:35:54.953939   35522 system_pods.go:61] "kube-proxy-x9wrz" [8c60595a-679c-439d-bcb9-3b302dadf3d2] Running
	I1003 18:35:54.953947   35522 system_pods.go:61] "kube-scheduler-test-preload-442251" [8e109cee-8522-4fd7-bac9-09e674fd4d7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 18:35:54.953952   35522 system_pods.go:61] "storage-provisioner" [01343337-d587-4601-bbe7-064a87b89671] Running
	I1003 18:35:54.953964   35522 system_pods.go:74] duration metric: took 3.957144ms to wait for pod list to return data ...
	I1003 18:35:54.953982   35522 default_sa.go:34] waiting for default service account to be created ...
	I1003 18:35:54.956898   35522 default_sa.go:45] found service account: "default"
	I1003 18:35:54.956940   35522 default_sa.go:55] duration metric: took 2.951393ms for default service account to be created ...
	I1003 18:35:54.956949   35522 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 18:35:54.960214   35522 system_pods.go:86] 7 kube-system pods found
	I1003 18:35:54.960238   35522 system_pods.go:89] "coredns-668d6bf9bc-d4ks7" [e2b0c6ed-3e09-4914-ab49-ee58fc82324e] Running
	I1003 18:35:54.960243   35522 system_pods.go:89] "etcd-test-preload-442251" [41d7b57b-e1ff-4923-b27a-baae7e8438ba] Running
	I1003 18:35:54.960247   35522 system_pods.go:89] "kube-apiserver-test-preload-442251" [edcf0d40-ea5e-4143-8c35-7edfa63478a1] Running
	I1003 18:35:54.960250   35522 system_pods.go:89] "kube-controller-manager-test-preload-442251" [5ada30cd-3b7b-402f-89b2-7f2618065bc2] Running
	I1003 18:35:54.960253   35522 system_pods.go:89] "kube-proxy-x9wrz" [8c60595a-679c-439d-bcb9-3b302dadf3d2] Running
	I1003 18:35:54.960264   35522 system_pods.go:89] "kube-scheduler-test-preload-442251" [8e109cee-8522-4fd7-bac9-09e674fd4d7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 18:35:54.960268   35522 system_pods.go:89] "storage-provisioner" [01343337-d587-4601-bbe7-064a87b89671] Running
	I1003 18:35:54.960276   35522 system_pods.go:126] duration metric: took 3.320985ms to wait for k8s-apps to be running ...
	I1003 18:35:54.960284   35522 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 18:35:54.960335   35522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:35:54.978874   35522 system_svc.go:56] duration metric: took 18.576331ms WaitForService to wait for kubelet
	I1003 18:35:54.978911   35522 kubeadm.go:586] duration metric: took 9.36569848s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:35:54.978932   35522 node_conditions.go:102] verifying NodePressure condition ...
	I1003 18:35:54.982270   35522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1003 18:35:54.982301   35522 node_conditions.go:123] node cpu capacity is 2
	I1003 18:35:54.982313   35522 node_conditions.go:105] duration metric: took 3.374991ms to run NodePressure ...
	I1003 18:35:54.982327   35522 start.go:241] waiting for startup goroutines ...
	I1003 18:35:54.982337   35522 start.go:246] waiting for cluster config update ...
	I1003 18:35:54.982348   35522 start.go:255] writing updated cluster config ...
	I1003 18:35:54.982720   35522 ssh_runner.go:195] Run: rm -f paused
	I1003 18:35:54.990333   35522 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:35:54.991052   35522 kapi.go:59] client config for test-preload-442251: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/profiles/test-preload-442251/client.key", CAFile:"/home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c3c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 18:35:54.994500   35522 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-d4ks7" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:54.999342   35522 pod_ready.go:94] pod "coredns-668d6bf9bc-d4ks7" is "Ready"
	I1003 18:35:54.999367   35522 pod_ready.go:86] duration metric: took 4.840117ms for pod "coredns-668d6bf9bc-d4ks7" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.001883   35522 pod_ready.go:83] waiting for pod "etcd-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.007186   35522 pod_ready.go:94] pod "etcd-test-preload-442251" is "Ready"
	I1003 18:35:55.007267   35522 pod_ready.go:86] duration metric: took 5.346672ms for pod "etcd-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.010978   35522 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.015789   35522 pod_ready.go:94] pod "kube-apiserver-test-preload-442251" is "Ready"
	I1003 18:35:55.015823   35522 pod_ready.go:86] duration metric: took 4.814163ms for pod "kube-apiserver-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.018256   35522 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.394651   35522 pod_ready.go:94] pod "kube-controller-manager-test-preload-442251" is "Ready"
	I1003 18:35:55.394677   35522 pod_ready.go:86] duration metric: took 376.389475ms for pod "kube-controller-manager-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.595543   35522 pod_ready.go:83] waiting for pod "kube-proxy-x9wrz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:55.995187   35522 pod_ready.go:94] pod "kube-proxy-x9wrz" is "Ready"
	I1003 18:35:55.995214   35522 pod_ready.go:86] duration metric: took 399.634879ms for pod "kube-proxy-x9wrz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:56.194646   35522 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:57.795193   35522 pod_ready.go:94] pod "kube-scheduler-test-preload-442251" is "Ready"
	I1003 18:35:57.795221   35522 pod_ready.go:86] duration metric: took 1.600541062s for pod "kube-scheduler-test-preload-442251" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:35:57.795236   35522 pod_ready.go:40] duration metric: took 2.804861625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
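The final sweep above checks every control-plane pod by label rather than by name. A client-go sketch of the same pass, reusing the kubeconfig from the node-readiness sketch; each selector mirrors one of the labels listed in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21625-8656/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One selector per control-plane component named in the log above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
}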
	I1003 18:35:57.839426   35522 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1003 18:35:57.841422   35522 out.go:203] 
	W1003 18:35:57.842961   35522 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1003 18:35:57.844441   35522 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1003 18:35:57.846153   35522 out.go:179] * Done! kubectl is now configured to use "test-preload-442251" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.691073341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10330705-2082-406a-8267-711ab4e2db8c name=/runtime.v1.RuntimeService/Version
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.693952869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d185b1f-3b0a-4454-9522-803e9895ab02 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.694410054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516558694388856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d185b1f-3b0a-4454-9522-803e9895ab02 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.695613712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bc48fea-1ab3-4b5f-81f6-92b543a6f578 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.695668912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bc48fea-1ab3-4b5f-81f6-92b543a6f578 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.695822162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dcc7ab1f7441b0a46982ba249a988ffa238734fb9c31cc8104282083cd530b,PodSandboxId:0875a2e9de4bf1524ad797dee1bcde3aaef2042163a6daa59a5e6e80be542212,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759516552653989169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-d4ks7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b0c6ed-3e09-4914-ab49-ee58fc82324e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6884a498d212409df6d2fb73b2c00c734c3f95c1f46f5b077372ade277827ee,PodSandboxId:8a02bc7a07c30e4797a519ef7bb9f4bf309f9514eb97f275fc70b3d5ffb3119f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759516545208403584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c60595a-679c-439d-bcb9-3b302dadf3d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960506b43f3831a343ecaf3aeb191b5ccb40f2dadad05550f3187d1f82103fb7,PodSandboxId:88cd4e7904be6895b01f498e8155dbe38ed1aebaa1a0977502da1a2d1866845f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759516545047882970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01343337-d587-4601-bbe7-064a87b89671,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c1651276681b0d0b7c55ce77da10d6b77f6636c52fdd2ffc65caa1252e41c4,PodSandboxId:8163f89643a569182e0a450981b812e80ab56db7a8793cdd48b993ca2860978d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759516540833860870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ecf0cbde607e25c29b9a9af6b4ef153,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1af4e17b289b63250d2b1dd6e37bdc3a75c94631fd0c0f7307e078640e9b91,PodSandboxId:f668dbeb99344607090f1f11ee0b18f28e4cda5fc5a4f75c1a6ad14503ab9826,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759516540851437319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b254f41021ec5926fb47d389cb8801a9,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a47a8de1653dcd2684fe469e0c7daed1e3ac6c466f4f8b6bc3ea5e944bd563f,PodSandboxId:07a0e10d05372fd3a7696e1fa9a12756078a0f09a0c608f6af6fda17006180fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759516540810724415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cc507b1d08078d9186ef93f52e0656,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00f1adc2e55c396f181bc7e2488884556f5782a259e5cb34dda519f06e45c9f,PodSandboxId:1966b66e6ebe5f816337fc79e7e62a3f8916958579f93af3806cd0ac97df0114,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759516540782336329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6974da0b518871c0350c30a2dda84dd8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bc48fea-1ab3-4b5f-81f6-92b543a6f578 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.737946942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43840290-b867-4d73-892d-c2135ad6ac9c name=/runtime.v1.RuntimeService/Version
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.738060088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43840290-b867-4d73-892d-c2135ad6ac9c name=/runtime.v1.RuntimeService/Version
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.739355289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d218d6a-bb99-4ef5-8746-540422f571f8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.739854894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516558739793063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d218d6a-bb99-4ef5-8746-540422f571f8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.740642295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8da45c4f-77fe-4381-9911-855abe5c2acc name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.740768162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8da45c4f-77fe-4381-9911-855abe5c2acc name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.741326836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dcc7ab1f7441b0a46982ba249a988ffa238734fb9c31cc8104282083cd530b,PodSandboxId:0875a2e9de4bf1524ad797dee1bcde3aaef2042163a6daa59a5e6e80be542212,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759516552653989169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-d4ks7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b0c6ed-3e09-4914-ab49-ee58fc82324e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6884a498d212409df6d2fb73b2c00c734c3f95c1f46f5b077372ade277827ee,PodSandboxId:8a02bc7a07c30e4797a519ef7bb9f4bf309f9514eb97f275fc70b3d5ffb3119f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759516545208403584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8c60595a-679c-439d-bcb9-3b302dadf3d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960506b43f3831a343ecaf3aeb191b5ccb40f2dadad05550f3187d1f82103fb7,PodSandboxId:88cd4e7904be6895b01f498e8155dbe38ed1aebaa1a0977502da1a2d1866845f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759516545047882970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01
343337-d587-4601-bbe7-064a87b89671,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c1651276681b0d0b7c55ce77da10d6b77f6636c52fdd2ffc65caa1252e41c4,PodSandboxId:8163f89643a569182e0a450981b812e80ab56db7a8793cdd48b993ca2860978d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759516540833860870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ecf0cbde607e25c29b9a9af6b4ef153,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1af4e17b289b63250d2b1dd6e37bdc3a75c94631fd0c0f7307e078640e9b91,PodSandboxId:f668dbeb99344607090f1f11ee0b18f28e4cda5fc5a4f75c1a6ad14503ab9826,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759516540851437319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b254f41021ec5926fb47d389cb8801a9,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a47a8de1653dcd2684fe469e0c7daed1e3ac6c466f4f8b6bc3ea5e944bd563f,PodSandboxId:07a0e10d05372fd3a7696e1fa9a12756078a0f09a0c608f6af6fda17006180fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759516540810724415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cc507b1d08078d9186ef93f52e0656,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00f1adc2e55c396f181bc7e2488884556f5782a259e5cb34dda519f06e45c9f,PodSandboxId:1966b66e6ebe5f816337fc79e7e62a3f8916958579f93af3806cd0ac97df0114,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759516540782336329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6974da0b518871c0350c30a2dda84dd8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8da45c4f-77fe-4381-9911-855abe5c2acc name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.781614628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3036c76c-80c3-4555-b532-5a4f0b68c1d3 name=/runtime.v1.RuntimeService/Version
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.781725057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3036c76c-80c3-4555-b532-5a4f0b68c1d3 name=/runtime.v1.RuntimeService/Version
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.783110033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e7dc27e-f098-41fc-9776-2d70153c7bc0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.783620221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516558783595341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e7dc27e-f098-41fc-9776-2d70153c7bc0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.784485638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e6ef0a1-39af-4885-a318-b0f3c3471735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.784566491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e6ef0a1-39af-4885-a318-b0f3c3471735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.784761093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dcc7ab1f7441b0a46982ba249a988ffa238734fb9c31cc8104282083cd530b,PodSandboxId:0875a2e9de4bf1524ad797dee1bcde3aaef2042163a6daa59a5e6e80be542212,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759516552653989169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-d4ks7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b0c6ed-3e09-4914-ab49-ee58fc82324e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6884a498d212409df6d2fb73b2c00c734c3f95c1f46f5b077372ade277827ee,PodSandboxId:8a02bc7a07c30e4797a519ef7bb9f4bf309f9514eb97f275fc70b3d5ffb3119f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759516545208403584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8c60595a-679c-439d-bcb9-3b302dadf3d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960506b43f3831a343ecaf3aeb191b5ccb40f2dadad05550f3187d1f82103fb7,PodSandboxId:88cd4e7904be6895b01f498e8155dbe38ed1aebaa1a0977502da1a2d1866845f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759516545047882970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01
343337-d587-4601-bbe7-064a87b89671,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c1651276681b0d0b7c55ce77da10d6b77f6636c52fdd2ffc65caa1252e41c4,PodSandboxId:8163f89643a569182e0a450981b812e80ab56db7a8793cdd48b993ca2860978d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759516540833860870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ecf0cbde607e25c29b9a9af6b4ef153,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1af4e17b289b63250d2b1dd6e37bdc3a75c94631fd0c0f7307e078640e9b91,PodSandboxId:f668dbeb99344607090f1f11ee0b18f28e4cda5fc5a4f75c1a6ad14503ab9826,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759516540851437319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b254f41021ec5926fb47d389cb8801a9,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a47a8de1653dcd2684fe469e0c7daed1e3ac6c466f4f8b6bc3ea5e944bd563f,PodSandboxId:07a0e10d05372fd3a7696e1fa9a12756078a0f09a0c608f6af6fda17006180fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759516540810724415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cc507b1d08078d9186ef93f52e0656,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00f1adc2e55c396f181bc7e2488884556f5782a259e5cb34dda519f06e45c9f,PodSandboxId:1966b66e6ebe5f816337fc79e7e62a3f8916958579f93af3806cd0ac97df0114,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759516540782336329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6974da0b518871c0350c30a2dda84dd8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e6ef0a1-39af-4885-a318-b0f3c3471735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.808616703Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=68b433cc-ba13-4ee3-8b2d-d4862d2fa020 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.808846759Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0875a2e9de4bf1524ad797dee1bcde3aaef2042163a6daa59a5e6e80be542212,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-d4ks7,Uid:e2b0c6ed-3e09-4914-ab49-ee58fc82324e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759516552430607269,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-d4ks7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b0c6ed-3e09-4914-ab49-ee58fc82324e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-03T18:35:44.588692429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a02bc7a07c30e4797a519ef7bb9f4bf309f9514eb97f275fc70b3d5ffb3119f,Metadata:&PodSandboxMetadata{Name:kube-proxy-x9wrz,Uid:8c60595a-679c-439d-bcb9-3b302dadf3d2,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1759516544902722013,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x9wrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c60595a-679c-439d-bcb9-3b302dadf3d2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-03T18:35:44.588731875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88cd4e7904be6895b01f498e8155dbe38ed1aebaa1a0977502da1a2d1866845f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:01343337-d587-4601-bbe7-064a87b89671,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759516544901746663,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01343337-d587-4601-bbe7-064a
87b89671,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-03T18:35:44.588734559Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1966b66e6ebe5f816337fc79e7e62a3f8916958579f93af3806cd0ac97df0114,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-442251,Uid:6974da0
b518871c0350c30a2dda84dd8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759516540549213476,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6974da0b518871c0350c30a2dda84dd8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6974da0b518871c0350c30a2dda84dd8,kubernetes.io/config.seen: 2025-10-03T18:35:38.587560995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:07a0e10d05372fd3a7696e1fa9a12756078a0f09a0c608f6af6fda17006180fa,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-442251,Uid:c0cc507b1d08078d9186ef93f52e0656,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759516540548776166,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-442251,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cc507b1d08078d9186ef93f52e0656,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c0cc507b1d08078d9186ef93f52e0656,kubernetes.io/config.seen: 2025-10-03T18:35:38.587559812Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8163f89643a569182e0a450981b812e80ab56db7a8793cdd48b993ca2860978d,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-442251,Uid:8ecf0cbde607e25c29b9a9af6b4ef153,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759516540547755874,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ecf0cbde607e25c29b9a9af6b4ef153,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.229:2379,kubernetes.io/config.hash: 8ecf0cbde607e25c29b9a9af6b4ef153,kubernetes.io/config.seen: 2025-10-03T18
:35:38.643152892Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f668dbeb99344607090f1f11ee0b18f28e4cda5fc5a4f75c1a6ad14503ab9826,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-442251,Uid:b254f41021ec5926fb47d389cb8801a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759516540545016511,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b254f41021ec5926fb47d389cb8801a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.229:8443,kubernetes.io/config.hash: b254f41021ec5926fb47d389cb8801a9,kubernetes.io/config.seen: 2025-10-03T18:35:38.587556071Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=68b433cc-ba13-4ee3-8b2d-d4862d2fa020 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.810367515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d82ec0f8-a0a5-4032-8fe0-953c58cb1314 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.810443471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d82ec0f8-a0a5-4032-8fe0-953c58cb1314 name=/runtime.v1.RuntimeService/ListContainers
	Oct 03 18:35:58 test-preload-442251 crio[832]: time="2025-10-03 18:35:58.810639582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:74dcc7ab1f7441b0a46982ba249a988ffa238734fb9c31cc8104282083cd530b,PodSandboxId:0875a2e9de4bf1524ad797dee1bcde3aaef2042163a6daa59a5e6e80be542212,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759516552653989169,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-d4ks7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b0c6ed-3e09-4914-ab49-ee58fc82324e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6884a498d212409df6d2fb73b2c00c734c3f95c1f46f5b077372ade277827ee,PodSandboxId:8a02bc7a07c30e4797a519ef7bb9f4bf309f9514eb97f275fc70b3d5ffb3119f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759516545208403584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8c60595a-679c-439d-bcb9-3b302dadf3d2,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960506b43f3831a343ecaf3aeb191b5ccb40f2dadad05550f3187d1f82103fb7,PodSandboxId:88cd4e7904be6895b01f498e8155dbe38ed1aebaa1a0977502da1a2d1866845f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759516545047882970,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01
343337-d587-4601-bbe7-064a87b89671,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c1651276681b0d0b7c55ce77da10d6b77f6636c52fdd2ffc65caa1252e41c4,PodSandboxId:8163f89643a569182e0a450981b812e80ab56db7a8793cdd48b993ca2860978d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759516540833860870,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ecf0cbde607e25c29b9a9af6b4ef153,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f1af4e17b289b63250d2b1dd6e37bdc3a75c94631fd0c0f7307e078640e9b91,PodSandboxId:f668dbeb99344607090f1f11ee0b18f28e4cda5fc5a4f75c1a6ad14503ab9826,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759516540851437319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b254f41021ec5926fb47d389cb8801a9,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a47a8de1653dcd2684fe469e0c7daed1e3ac6c466f4f8b6bc3ea5e944bd563f,PodSandboxId:07a0e10d05372fd3a7696e1fa9a12756078a0f09a0c608f6af6fda17006180fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759516540810724415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cc507b1d08078d9186ef93f52e0656,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e00f1adc2e55c396f181bc7e2488884556f5782a259e5cb34dda519f06e45c9f,PodSandboxId:1966b66e6ebe5f816337fc79e7e62a3f8916958579f93af3806cd0ac97df0114,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759516540782336329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-442251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6974da0b518871c0350c30a2dda84dd8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d82ec0f8-a0a5-4032-8fe0-953c58cb1314 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	74dcc7ab1f744       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   6 seconds ago       Running             coredns                   1                   0875a2e9de4bf       coredns-668d6bf9bc-d4ks7
	b6884a498d212       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   13 seconds ago      Running             kube-proxy                1                   8a02bc7a07c30       kube-proxy-x9wrz
	960506b43f383       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       2                   88cd4e7904be6       storage-provisioner
	1f1af4e17b289       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   f668dbeb99344       kube-apiserver-test-preload-442251
	70c1651276681       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   8163f89643a56       etcd-test-preload-442251
	9a47a8de1653d       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   07a0e10d05372       kube-controller-manager-test-preload-442251
	e00f1adc2e55c       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   1966b66e6ebe5       kube-scheduler-test-preload-442251
	
	
	==> coredns [74dcc7ab1f7441b0a46982ba249a988ffa238734fb9c31cc8104282083cd530b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39687 - 6527 "HINFO IN 6090111365990438584.1692222119932893932. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.068314664s
	
	
	==> describe nodes <==
	Name:               test-preload-442251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-442251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=test-preload-442251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T18_34_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 18:34:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-442251
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 18:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 18:35:54 +0000   Fri, 03 Oct 2025 18:34:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 18:35:54 +0000   Fri, 03 Oct 2025 18:34:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 18:35:54 +0000   Fri, 03 Oct 2025 18:34:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 18:35:54 +0000   Fri, 03 Oct 2025 18:35:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    test-preload-442251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e1efe20120d45fa848bef40a6cc8d1f
	  System UUID:                1e1efe20-120d-45fa-848b-ef40a6cc8d1f
	  Boot ID:                    41ad5a93-4d0a-4611-9f3d-b1d0e81d6918
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-d4ks7                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-test-preload-442251                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         108s
	  kube-system                 kube-apiserver-test-preload-442251             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-test-preload-442251    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-x9wrz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-test-preload-442251             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 102s               kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   Starting                 109s               kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s               kubelet          Node test-preload-442251 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s               kubelet          Node test-preload-442251 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s               kubelet          Node test-preload-442251 status is now: NodeHasSufficientPID
	  Normal   NodeReady                108s               kubelet          Node test-preload-442251 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  108s               kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           105s               node-controller  Node test-preload-442251 event: Registered Node test-preload-442251 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-442251 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-442251 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-442251 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-442251 has been rebooted, boot id: 41ad5a93-4d0a-4611-9f3d-b1d0e81d6918
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-442251 event: Registered Node test-preload-442251 in Controller
	
	
	==> dmesg <==
	[Oct 3 18:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000529] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.952575] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107227] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.615072] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.000348] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.023379] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [70c1651276681b0d0b7c55ce77da10d6b77f6636c52fdd2ffc65caa1252e41c4] <==
	{"level":"info","ts":"2025-10-03T18:35:41.286667Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","added-peer-id":"b8647f2870156d71","added-peer-peer-urls":["https://192.168.39.229:2380"]}
	{"level":"info","ts":"2025-10-03T18:35:41.286770Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T18:35:41.286808Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T18:35:41.294248Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-03T18:35:41.295836Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-03T18:35:41.302210Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-03T18:35:41.303117Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-03T18:35:41.303312Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2025-10-03T18:35:41.303345Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2025-10-03T18:35:42.648107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-03T18:35:42.648160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-03T18:35:42.648194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2025-10-03T18:35:42.648207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 3"}
	{"level":"info","ts":"2025-10-03T18:35:42.648213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2025-10-03T18:35:42.648221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 3"}
	{"level":"info","ts":"2025-10-03T18:35:42.648228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2025-10-03T18:35:42.649935Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:test-preload-442251 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-03T18:35:42.650143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T18:35:42.650125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T18:35:42.650410Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-03T18:35:42.650425Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-03T18:35:42.650990Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-03T18:35:42.650997Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-03T18:35:42.651713Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2025-10-03T18:35:42.651741Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:35:59 up 0 min,  0 users,  load average: 0.96, 0.25, 0.09
	Linux test-preload-442251 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1f1af4e17b289b63250d2b1dd6e37bdc3a75c94631fd0c0f7307e078640e9b91] <==
	I1003 18:35:43.955540       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1003 18:35:43.955583       1 policy_source.go:240] refreshing policies
	I1003 18:35:43.958319       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 18:35:43.958852       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 18:35:43.958923       1 shared_informer.go:320] Caches are synced for configmaps
	I1003 18:35:43.959125       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 18:35:43.959149       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 18:35:43.959271       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 18:35:43.959910       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1003 18:35:43.959964       1 aggregator.go:171] initial CRD sync complete...
	I1003 18:35:43.959970       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 18:35:43.959975       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 18:35:43.959979       1 cache.go:39] Caches are synced for autoregister controller
	I1003 18:35:43.963263       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 18:35:43.966877       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1003 18:35:43.970959       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 18:35:44.663147       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1003 18:35:44.767500       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 18:35:45.405244       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1003 18:35:45.451986       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1003 18:35:45.499363       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 18:35:45.509525       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 18:35:47.123962       1 controller.go:615] quota admission added evaluator for: endpoints
	I1003 18:35:47.413103       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 18:35:47.466175       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9a47a8de1653dcd2684fe469e0c7daed1e3ac6c466f4f8b6bc3ea5e944bd563f] <==
	I1003 18:35:47.067832       1 shared_informer.go:320] Caches are synced for resource quota
	I1003 18:35:47.068087       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1003 18:35:47.068254       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1003 18:35:47.068278       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1003 18:35:47.068293       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1003 18:35:47.068584       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442251"
	I1003 18:35:47.070737       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1003 18:35:47.075168       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1003 18:35:47.076578       1 shared_informer.go:320] Caches are synced for resource quota
	I1003 18:35:47.078316       1 shared_informer.go:320] Caches are synced for job
	I1003 18:35:47.081516       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1003 18:35:47.082081       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1003 18:35:47.082146       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1003 18:35:47.082491       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1003 18:35:47.084485       1 shared_informer.go:320] Caches are synced for PVC protection
	I1003 18:35:47.097385       1 shared_informer.go:320] Caches are synced for garbage collector
	I1003 18:35:47.097560       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442251"
	I1003 18:35:47.474720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="433.738108ms"
	I1003 18:35:47.475121       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="201.617µs"
	I1003 18:35:52.808094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="115.476µs"
	I1003 18:35:52.852793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.211084ms"
	I1003 18:35:52.853736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="88.596µs"
	I1003 18:35:54.421432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442251"
	I1003 18:35:54.436920       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-442251"
	I1003 18:35:57.063687       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b6884a498d212409df6d2fb73b2c00c734c3f95c1f46f5b077372ade277827ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1003 18:35:45.448742       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1003 18:35:45.463780       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	E1003 18:35:45.463860       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 18:35:45.523625       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1003 18:35:45.523726       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1003 18:35:45.523765       1 server_linux.go:170] "Using iptables Proxier"
	I1003 18:35:45.526681       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 18:35:45.527133       1 server.go:497] "Version info" version="v1.32.0"
	I1003 18:35:45.527230       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:35:45.528769       1 config.go:199] "Starting service config controller"
	I1003 18:35:45.528830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1003 18:35:45.528874       1 config.go:105] "Starting endpoint slice config controller"
	I1003 18:35:45.528891       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1003 18:35:45.529534       1 config.go:329] "Starting node config controller"
	I1003 18:35:45.529574       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1003 18:35:45.629849       1 shared_informer.go:320] Caches are synced for node config
	I1003 18:35:45.629934       1 shared_informer.go:320] Caches are synced for service config
	I1003 18:35:45.629945       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e00f1adc2e55c396f181bc7e2488884556f5782a259e5cb34dda519f06e45c9f] <==
	I1003 18:35:42.131250       1 serving.go:386] Generated self-signed cert in-memory
	W1003 18:35:43.829656       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1003 18:35:43.829695       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1003 18:35:43.829705       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1003 18:35:43.829715       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1003 18:35:43.938794       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1003 18:35:43.938932       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:35:43.942180       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:35:43.942321       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1003 18:35:43.943640       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1003 18:35:43.943755       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 18:35:44.043013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: E1003 18:35:44.066467    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-442251\" already exists" pod="kube-system/kube-scheduler-test-preload-442251"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.066499    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-442251"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: E1003 18:35:44.076895    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-442251\" already exists" pod="kube-system/etcd-test-preload-442251"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.076925    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-442251"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: E1003 18:35:44.087722    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-442251\" already exists" pod="kube-system/kube-apiserver-test-preload-442251"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.585661    1152 apiserver.go:52] "Watching apiserver"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: E1003 18:35:44.591325    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-d4ks7" podUID="e2b0c6ed-3e09-4914-ab49-ee58fc82324e"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.600222    1152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.649504    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c60595a-679c-439d-bcb9-3b302dadf3d2-xtables-lock\") pod \"kube-proxy-x9wrz\" (UID: \"8c60595a-679c-439d-bcb9-3b302dadf3d2\") " pod="kube-system/kube-proxy-x9wrz"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.649647    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c60595a-679c-439d-bcb9-3b302dadf3d2-lib-modules\") pod \"kube-proxy-x9wrz\" (UID: \"8c60595a-679c-439d-bcb9-3b302dadf3d2\") " pod="kube-system/kube-proxy-x9wrz"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: I1003 18:35:44.649688    1152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/01343337-d587-4601-bbe7-064a87b89671-tmp\") pod \"storage-provisioner\" (UID: \"01343337-d587-4601-bbe7-064a87b89671\") " pod="kube-system/storage-provisioner"
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: E1003 18:35:44.649755    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 03 18:35:44 test-preload-442251 kubelet[1152]: E1003 18:35:44.649817    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume podName:e2b0c6ed-3e09-4914-ab49-ee58fc82324e nodeName:}" failed. No retries permitted until 2025-10-03 18:35:45.149797861 +0000 UTC m=+6.670329302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume") pod "coredns-668d6bf9bc-d4ks7" (UID: "e2b0c6ed-3e09-4914-ab49-ee58fc82324e") : object "kube-system"/"coredns" not registered
	Oct 03 18:35:45 test-preload-442251 kubelet[1152]: E1003 18:35:45.153599    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 03 18:35:45 test-preload-442251 kubelet[1152]: E1003 18:35:45.153666    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume podName:e2b0c6ed-3e09-4914-ab49-ee58fc82324e nodeName:}" failed. No retries permitted until 2025-10-03 18:35:46.153652437 +0000 UTC m=+7.674183888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume") pod "coredns-668d6bf9bc-d4ks7" (UID: "e2b0c6ed-3e09-4914-ab49-ee58fc82324e") : object "kube-system"/"coredns" not registered
	Oct 03 18:35:46 test-preload-442251 kubelet[1152]: E1003 18:35:46.163352    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 03 18:35:46 test-preload-442251 kubelet[1152]: E1003 18:35:46.163546    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume podName:e2b0c6ed-3e09-4914-ab49-ee58fc82324e nodeName:}" failed. No retries permitted until 2025-10-03 18:35:48.163519628 +0000 UTC m=+9.684051068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume") pod "coredns-668d6bf9bc-d4ks7" (UID: "e2b0c6ed-3e09-4914-ab49-ee58fc82324e") : object "kube-system"/"coredns" not registered
	Oct 03 18:35:46 test-preload-442251 kubelet[1152]: E1003 18:35:46.624345    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-d4ks7" podUID="e2b0c6ed-3e09-4914-ab49-ee58fc82324e"
	Oct 03 18:35:48 test-preload-442251 kubelet[1152]: E1003 18:35:48.179432    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 03 18:35:48 test-preload-442251 kubelet[1152]: E1003 18:35:48.179518    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume podName:e2b0c6ed-3e09-4914-ab49-ee58fc82324e nodeName:}" failed. No retries permitted until 2025-10-03 18:35:52.179504601 +0000 UTC m=+13.700036052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e2b0c6ed-3e09-4914-ab49-ee58fc82324e-config-volume") pod "coredns-668d6bf9bc-d4ks7" (UID: "e2b0c6ed-3e09-4914-ab49-ee58fc82324e") : object "kube-system"/"coredns" not registered
	Oct 03 18:35:48 test-preload-442251 kubelet[1152]: E1003 18:35:48.627410    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-d4ks7" podUID="e2b0c6ed-3e09-4914-ab49-ee58fc82324e"
	Oct 03 18:35:48 test-preload-442251 kubelet[1152]: E1003 18:35:48.678184    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516548677689776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 03 18:35:48 test-preload-442251 kubelet[1152]: E1003 18:35:48.678251    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516548677689776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 03 18:35:58 test-preload-442251 kubelet[1152]: E1003 18:35:58.680222    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516558679848817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 03 18:35:58 test-preload-442251 kubelet[1152]: E1003 18:35:58.680254    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759516558679848817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [960506b43f3831a343ecaf3aeb191b5ccb40f2dadad05550f3187d1f82103fb7] <==
	I1003 18:35:45.196210       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-442251 -n test-preload-442251
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-442251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-442251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-442251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-442251: (1.004296174s)
--- FAIL: TestPreload (160.39s)
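
For reference, the post-mortem collection shown above can be repeated by hand against any live profile using the same commands the harness records in this transcript; <profile> below is a placeholder, not a value from this run:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p <profile> -n <profile>
	out/minikube-linux-amd64 -p <profile> logs -n 25
	kubectl --context <profile> get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running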


Test pass (287/329)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 24.14
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 11.66
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.49
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.66
22 TestOffline 102.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 202.07
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.55
35 TestAddons/parallel/Registry 16.9
36 TestAddons/parallel/RegistryCreds 0.78
38 TestAddons/parallel/InspektorGadget 6.32
39 TestAddons/parallel/MetricsServer 6.23
41 TestAddons/parallel/CSI 46.49
42 TestAddons/parallel/Headlamp 20.75
43 TestAddons/parallel/CloudSpanner 6.65
44 TestAddons/parallel/LocalPath 60.78
45 TestAddons/parallel/NvidiaDevicePlugin 6.94
46 TestAddons/parallel/Yakd 10.81
48 TestAddons/StoppedEnableDisable 85.14
49 TestCertOptions 67.2
50 TestCertExpiration 289.3
52 TestForceSystemdFlag 61.65
53 TestForceSystemdEnv 60.04
58 TestErrorSpam/setup 37.91
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.66
61 TestErrorSpam/pause 1.58
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 5.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.85
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32.79
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
75 TestFunctional/serial/CacheCmd/cache/add_local 2.18
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 46.4
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.5
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 3.84
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 30.17
91 TestFunctional/parallel/DryRun 0.25
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 13.5
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 45.42
101 TestFunctional/parallel/SSHCmd 0.32
102 TestFunctional/parallel/CpCmd 1.11
103 TestFunctional/parallel/MySQL 28.36
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.08
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
113 TestFunctional/parallel/License 0.38
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
121 TestFunctional/parallel/ImageCommands/Setup 1.81
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.5
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
132 TestFunctional/parallel/MountCmd/any-port 8.21
133 TestFunctional/parallel/ProfileCmd/profile_list 0.31
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.59
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.65
142 TestFunctional/parallel/MountCmd/specific-port 1.45
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.15
147 TestFunctional/parallel/ServiceCmd/DeployApp 29.17
148 TestFunctional/parallel/ServiceCmd/List 1.21
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.21
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
151 TestFunctional/parallel/ServiceCmd/Format 0.24
152 TestFunctional/parallel/ServiceCmd/URL 0.24
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 234.03
161 TestMultiControlPlane/serial/DeployApp 7.34
162 TestMultiControlPlane/serial/PingHostFromPods 1.35
163 TestMultiControlPlane/serial/AddWorkerNode 47.45
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
166 TestMultiControlPlane/serial/CopyFile 10.88
167 TestMultiControlPlane/serial/StopSecondaryNode 74.27
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
169 TestMultiControlPlane/serial/RestartSecondaryNode 35.99
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 368.04
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.35
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
174 TestMultiControlPlane/serial/StopCluster 243.98
175 TestMultiControlPlane/serial/RestartCluster 80.98
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 76.01
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
182 TestJSONOutput/start/Command 77.55
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.74
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.66
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.9
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.24
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 80.97
214 TestMountStart/serial/StartWithMountFirst 21.07
215 TestMountStart/serial/VerifyMountFirst 0.3
216 TestMountStart/serial/StartWithMountSecond 20.55
217 TestMountStart/serial/VerifyMountSecond 0.3
218 TestMountStart/serial/DeleteFirst 0.69
219 TestMountStart/serial/VerifyMountPostDelete 0.31
220 TestMountStart/serial/Stop 1.26
221 TestMountStart/serial/RestartStopped 18.15
222 TestMountStart/serial/VerifyMountPostStop 0.3
225 TestMultiNode/serial/FreshStart2Nodes 102.85
226 TestMultiNode/serial/DeployApp2Nodes 5.97
227 TestMultiNode/serial/PingHostFrom2Pods 0.88
228 TestMultiNode/serial/AddNode 46.72
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.48
231 TestMultiNode/serial/CopyFile 6.13
232 TestMultiNode/serial/StopNode 2.29
233 TestMultiNode/serial/StartAfterStop 41.37
234 TestMultiNode/serial/RestartKeepsNodes 303.04
235 TestMultiNode/serial/DeleteNode 2.68
236 TestMultiNode/serial/StopMultiNode 169.28
237 TestMultiNode/serial/RestartMultiNode 85.35
238 TestMultiNode/serial/ValidateNameConflict 43.17
245 TestScheduledStopUnix 110.11
249 TestRunningBinaryUpgrade 119.62
251 TestKubernetesUpgrade 184.66
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 78.34
256 TestNoKubernetes/serial/StartWithStopK8s 27.21
257 TestStoppedBinaryUpgrade/Setup 2.61
258 TestStoppedBinaryUpgrade/Upgrade 99.47
259 TestNoKubernetes/serial/Start 40.68
268 TestPause/serial/Start 111.5
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
270 TestNoKubernetes/serial/ProfileList 0.68
271 TestNoKubernetes/serial/Stop 1.23
272 TestNoKubernetes/serial/StartNoArgs 57.92
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
282 TestNetworkPlugins/group/false 4.37
286 TestPause/serial/SecondStartNoReconfiguration 73.97
288 TestStartStop/group/old-k8s-version/serial/FirstStart 116.46
289 TestPause/serial/Pause 0.94
290 TestPause/serial/VerifyStatus 0.23
291 TestPause/serial/Unpause 0.67
292 TestPause/serial/PauseAgain 0.92
293 TestPause/serial/DeletePaused 0.86
294 TestPause/serial/VerifyDeletedResources 1.09
296 TestStartStop/group/no-preload/serial/FirstStart 105
298 TestStartStop/group/embed-certs/serial/FirstStart 99.26
299 TestStartStop/group/old-k8s-version/serial/DeployApp 11.35
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
301 TestStartStop/group/old-k8s-version/serial/Stop 77.12
302 TestStartStop/group/embed-certs/serial/DeployApp 10.29
303 TestStartStop/group/no-preload/serial/DeployApp 10.3
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
305 TestStartStop/group/embed-certs/serial/Stop 79.06
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
307 TestStartStop/group/no-preload/serial/Stop 85.95
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
309 TestStartStop/group/old-k8s-version/serial/SecondStart 45.21
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
311 TestStartStop/group/embed-certs/serial/SecondStart 46.63
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.79
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
315 TestStartStop/group/no-preload/serial/SecondStart 89.63
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
319 TestStartStop/group/old-k8s-version/serial/Pause 2.97
321 TestStartStop/group/newest-cni/serial/FirstStart 65.82
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
325 TestStartStop/group/embed-certs/serial/Pause 3.11
326 TestNetworkPlugins/group/auto/Start 106.03
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.41
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
332 TestStartStop/group/newest-cni/serial/Stop 88.19
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
334 TestStartStop/group/no-preload/serial/Pause 2.91
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.72
337 TestNetworkPlugins/group/kindnet/Start 57.44
338 TestNetworkPlugins/group/auto/KubeletFlags 0.17
339 TestNetworkPlugins/group/auto/NetCatPod 11.3
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
342 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
343 TestNetworkPlugins/group/auto/DNS 0.17
344 TestNetworkPlugins/group/auto/Localhost 0.13
345 TestNetworkPlugins/group/auto/HairPin 0.13
346 TestNetworkPlugins/group/kindnet/DNS 0.15
347 TestNetworkPlugins/group/kindnet/Localhost 0.13
348 TestNetworkPlugins/group/kindnet/HairPin 0.13
349 TestNetworkPlugins/group/calico/Start 73.92
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
351 TestStartStop/group/newest-cni/serial/SecondStart 52.59
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.45
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 78.04
354 TestNetworkPlugins/group/custom-flannel/Start 117.48
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
358 TestStartStop/group/newest-cni/serial/Pause 3.11
359 TestNetworkPlugins/group/enable-default-cni/Start 74.39
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.31
362 TestNetworkPlugins/group/calico/NetCatPod 15.05
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
365 TestNetworkPlugins/group/calico/DNS 0.2
366 TestNetworkPlugins/group/calico/Localhost 0.15
367 TestNetworkPlugins/group/calico/HairPin 0.17
368 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
369 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.34
370 TestNetworkPlugins/group/flannel/Start 79.1
371 TestNetworkPlugins/group/bridge/Start 92.37
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.31
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.99
376 TestNetworkPlugins/group/custom-flannel/DNS 0.17
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
384 TestNetworkPlugins/group/flannel/NetCatPod 9.26
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.12
387 TestNetworkPlugins/group/flannel/HairPin 0.13
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
389 TestNetworkPlugins/group/bridge/NetCatPod 11.27
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.12
392 TestNetworkPlugins/group/bridge/HairPin 0.12

TestDownloadOnly/v1.28.0/json-events (24.14s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-041614 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-041614 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.136416605s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (24.14s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1003 17:42:59.583910   12564 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1003 17:42:59.584002   12564 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-041614
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-041614: exit status 85 (74.037915ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-041614 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-041614 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 17:42:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:42:35.499098   12575 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:42:35.499392   12575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:35.499403   12575 out.go:374] Setting ErrFile to fd 2...
	I1003 17:42:35.499407   12575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:42:35.499584   12575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	W1003 17:42:35.499725   12575 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21625-8656/.minikube/config/config.json: open /home/jenkins/minikube-integration/21625-8656/.minikube/config/config.json: no such file or directory
	I1003 17:42:35.500696   12575 out.go:368] Setting JSON to true
	I1003 17:42:35.501563   12575 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1500,"bootTime":1759511856,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:42:35.501656   12575 start.go:140] virtualization: kvm guest
	I1003 17:42:35.503911   12575 out.go:99] [download-only-041614] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1003 17:42:35.504048   12575 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 17:42:35.504066   12575 notify.go:220] Checking for updates...
	I1003 17:42:35.505687   12575 out.go:171] MINIKUBE_LOCATION=21625
	I1003 17:42:35.506942   12575 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:42:35.508287   12575 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 17:42:35.509835   12575 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:42:35.511342   12575 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1003 17:42:35.513623   12575 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:42:35.513887   12575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:42:36.031516   12575 out.go:99] Using the kvm2 driver based on user configuration
	I1003 17:42:36.031581   12575 start.go:304] selected driver: kvm2
	I1003 17:42:36.031614   12575 start.go:924] validating driver "kvm2" against <nil>
	I1003 17:42:36.032020   12575 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:42:36.032525   12575 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1003 17:42:36.032688   12575 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:42:36.032715   12575 cni.go:84] Creating CNI manager for ""
	I1003 17:42:36.032789   12575 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 17:42:36.032804   12575 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:42:36.032866   12575 start.go:348] cluster config:
	{Name:download-only-041614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-041614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:42:36.033088   12575 iso.go:125] acquiring lock: {Name:mk4ce219bd5cf5058f69eb8b10ebc9d907f5f7b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:42:36.034752   12575 out.go:99] Downloading VM boot image ...
	I1003 17:42:36.034802   12575 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21625-8656/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1003 17:42:46.017745   12575 out.go:99] Starting "download-only-041614" primary control-plane node in "download-only-041614" cluster
	I1003 17:42:46.017774   12575 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 17:42:46.110084   12575 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1003 17:42:46.110113   12575 cache.go:58] Caching tarball of preloaded images
	I1003 17:42:46.110291   12575 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 17:42:46.112297   12575 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1003 17:42:46.112323   12575 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1003 17:42:46.210935   12575 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1003 17:42:46.211077   12575 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-041614 host does not exist
	  To start a cluster, run: "minikube start -p download-only-041614"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
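Note: exit status 85 is the expected outcome here, not a defect. The profile was created with --download-only, so no VM host exists for `minikube logs` to query, exactly as the captured output states ("The control-plane node download-only-041614 host does not exist"). To actually materialize a cluster from the cached artifacts, the follow-up suggested by the output itself (not executed by this test) would be:

	out/minikube-linux-amd64 start -p download-only-041614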
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-041614
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (11.66s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-519501 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-519501 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.663214975s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.66s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1003 17:43:11.631075   12564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1003 17:43:11.631112   12564 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.49s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-519501
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-519501: exit status 85 (493.562194ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-041614 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-041614 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ delete  │ -p download-only-041614                                                                                                                                                 │ download-only-041614 │ jenkins │ v1.37.0 │ 03 Oct 25 17:42 UTC │ 03 Oct 25 17:42 UTC │
	│ start   │ -o=json --download-only -p download-only-519501 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-519501 │ jenkins │ v1.37.0 │ 03 Oct 25 17:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 17:43:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 17:43:00.021862   12843 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:43:00.022009   12843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:43:00.022017   12843 out.go:374] Setting ErrFile to fd 2...
	I1003 17:43:00.022024   12843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:43:00.022234   12843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 17:43:00.022708   12843 out.go:368] Setting JSON to true
	I1003 17:43:00.023562   12843 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1524,"bootTime":1759511856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:43:00.023616   12843 start.go:140] virtualization: kvm guest
	I1003 17:43:00.025820   12843 out.go:99] [download-only-519501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 17:43:00.026167   12843 notify.go:220] Checking for updates...
	I1003 17:43:00.027643   12843 out.go:171] MINIKUBE_LOCATION=21625
	I1003 17:43:00.029301   12843 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:43:00.030818   12843 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 17:43:00.032328   12843 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:43:00.033574   12843 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1003 17:43:00.036000   12843 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 17:43:00.036208   12843 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:43:00.067227   12843 out.go:99] Using the kvm2 driver based on user configuration
	I1003 17:43:00.067269   12843 start.go:304] selected driver: kvm2
	I1003 17:43:00.067282   12843 start.go:924] validating driver "kvm2" against <nil>
	I1003 17:43:00.067599   12843 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 17:43:00.068098   12843 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1003 17:43:00.068250   12843 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 17:43:00.068275   12843 cni.go:84] Creating CNI manager for ""
	I1003 17:43:00.068336   12843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1003 17:43:00.068348   12843 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 17:43:00.068399   12843 start.go:348] cluster config:
	{Name:download-only-519501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-519501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:43:00.068497   12843 iso.go:125] acquiring lock: {Name:mk4ce219bd5cf5058f69eb8b10ebc9d907f5f7b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 17:43:00.070205   12843 out.go:99] Starting "download-only-519501" primary control-plane node in "download-only-519501" cluster
	I1003 17:43:00.070235   12843 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:43:00.223224   12843 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1003 17:43:00.223267   12843 cache.go:58] Caching tarball of preloaded images
	I1003 17:43:00.223447   12843 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 17:43:00.225369   12843 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1003 17:43:00.225392   12843 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1003 17:43:00.320399   12843 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1003 17:43:00.320448   12843 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21625-8656/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-519501 host does not exist
	  To start a cluster, run: "minikube start -p download-only-519501"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.49s)

TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-519501
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1003 17:43:12.723127   12564 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-473056 --alsologtostderr --binary-mirror http://127.0.0.1:38937 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-473056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-473056
--- PASS: TestBinaryMirror (0.66s)

TestOffline (102.58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-680307 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-680307 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.60295518s)
helpers_test.go:175: Cleaning up "offline-crio-680307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-680307
--- PASS: TestOffline (102.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-925003
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-925003: exit status 85 (67.142081ms)

-- stdout --
	* Profile "addons-925003" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925003"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-925003
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-925003: exit status 85 (66.122216ms)

-- stdout --
	* Profile "addons-925003" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925003"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (202.07s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-925003 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-925003 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m22.06979662s)
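Each "(dbg) Run:" line in this report is the harness shelling out to the minikube binary and timing the call. A minimal sketch of that pattern, assuming only what the log shows (the profile and flags below come from the TestAddons/Setup invocation above, trimmed to two addons; this is not the real helper in addons_test.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same shape of invocation as logged above, with most --addons flags omitted.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "addons-925003",
		"--wait=true", "--memory=4096", "--alsologtostderr",
		"--addons=ingress", "--addons=ingress-dns",
		"--driver=kvm2", "--container-runtime=crio",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("start failed after %s: %v\n%s", time.Since(start), err, out)
		return
	}
	fmt.Printf("start took %s\n", time.Since(start)) // e.g. 3m22.06979662s in this run
}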
--- PASS: TestAddons/Setup (202.07s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-925003 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-925003 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (10.55s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-925003 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-925003 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [43d14f0c-4814-4e73-907a-448430154130] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [43d14f0c-4814-4e73-907a-448430154130] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004077715s
addons_test.go:694: (dbg) Run:  kubectl --context addons-925003 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-925003 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-925003 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.55s)

TestAddons/parallel/Registry (16.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 13.575391ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-w9j9n" [0381ad84-6981-475d-94b6-d8a0c3d4fe30] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013081174s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-s4dqm" [25650394-d79a-461d-834c-67479b4075e1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00465835s
addons_test.go:392: (dbg) Run:  kubectl --context addons-925003 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-925003 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-925003 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.112318047s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 ip
2025/10/03 17:47:11 [DEBUG] GET http://192.168.39.143:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable registry --alsologtostderr -v=1
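The DEBUG line above is a direct reachability probe of the registry endpoint on the node IP, separate from the in-cluster wget check. A trivial hedged sketch of that probe in Go, with the address taken from this log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Address from the DEBUG line above; any HTTP response at all means the
	// registry endpoint on the node is answering.
	resp, err := client.Get("http://192.168.39.143:5000")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}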
--- PASS: TestAddons/parallel/Registry (16.90s)

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.289437ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-925003
addons_test.go:332: (dbg) Run:  kubectl --context addons-925003 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

TestAddons/parallel/InspektorGadget (6.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lncq9" [134a63dc-4395-4735-b025-c7b2826ddd3d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004230276s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.32s)

TestAddons/parallel/MetricsServer (6.23s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 13.231992ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jkgpr" [23388409-c8d5-41f0-a700-8d78be777b5e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013142804s
addons_test.go:463: (dbg) Run:  kubectl --context addons-925003 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable metrics-server --alsologtostderr -v=1: (1.124281185s)
--- PASS: TestAddons/parallel/MetricsServer (6.23s)

TestAddons/parallel/CSI (46.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1003 17:47:16.456951   12564 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1003 17:47:16.463121   12564 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1003 17:47:16.463151   12564 kapi.go:107] duration metric: took 6.20628ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.21583ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-925003 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-925003 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e69847b5-65ac-4625-9a2a-5908e049a70e] Pending
helpers_test.go:352: "task-pv-pod" [e69847b5-65ac-4625-9a2a-5908e049a70e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e69847b5-65ac-4625-9a2a-5908e049a70e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003682154s
addons_test.go:572: (dbg) Run:  kubectl --context addons-925003 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-925003 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-925003 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-925003 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-925003 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-925003 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-925003 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [fdc33fdf-cc52-4143-b648-3c56b7d4f286] Pending
helpers_test.go:352: "task-pv-pod-restore" [fdc33fdf-cc52-4143-b648-3c56b7d4f286] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [fdc33fdf-cc52-4143-b648-3c56b7d4f286] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004529394s
addons_test.go:614: (dbg) Run:  kubectl --context addons-925003 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-925003 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-925003 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.005232103s)
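The runs of repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines earlier in this test are a poll loop waiting for each claim to leave Pending. A minimal sketch of such a loop, assuming only the kubectl invocation visible in the log (the poll interval and helper name are illustrative, not the values in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls kubectl until the named PVC reports the wanted
// phase (e.g. "Bound") or the timeout elapses.
func waitForPVCPhase(kubeContext, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	return fmt.Errorf("pvc %s/%s never reached phase %s", ns, name, want)
}

func main() {
	// Same claim and 6m0s budget as the "waiting 6m0s for pvc" line above.
	if err := waitForPVCPhase("addons-925003", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}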
--- PASS: TestAddons/parallel/CSI (46.49s)

TestAddons/parallel/Headlamp (20.75s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-925003 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-lpjn5" [729a5673-85de-4a27-b238-0bf6683643e2] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-lpjn5" [729a5673-85de-4a27-b238-0bf6683643e2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lpjn5" [729a5673-85de-4a27-b238-0bf6683643e2] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-lpjn5" [729a5673-85de-4a27-b238-0bf6683643e2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.017220086s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable headlamp --alsologtostderr -v=1: (5.825462346s)
--- PASS: TestAddons/parallel/Headlamp (20.75s)

TestAddons/parallel/CloudSpanner (6.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-z6qc4" [643aed27-2643-4531-b04c-7fe050d8b85b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004674123s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

TestAddons/parallel/LocalPath (60.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-925003 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-925003 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [76d53ab9-723d-44a4-b75a-3e48af05f9f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [76d53ab9-723d-44a4-b75a-3e48af05f9f4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [76d53ab9-723d-44a4-b75a-3e48af05f9f4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.00361657s
addons_test.go:967: (dbg) Run:  kubectl --context addons-925003 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 ssh "cat /opt/local-path-provisioner/pvc-6c53a42d-a019-4ceb-9ee4-98d0f0f5ced2_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-925003 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-925003 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.940776478s)
--- PASS: TestAddons/parallel/LocalPath (60.78s)

TestAddons/parallel/NvidiaDevicePlugin (6.94s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-b2hkn" [9a68947b-5baf-4caf-8409-6d59793d7c62] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.148251998s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.94s)

TestAddons/parallel/Yakd (10.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pfr66" [40f9cbd2-04af-442e-b05e-14790bae5542] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004769497s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-925003 addons disable yakd --alsologtostderr -v=1: (5.806042169s)
--- PASS: TestAddons/parallel/Yakd (10.81s)

TestAddons/StoppedEnableDisable (85.14s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-925003
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-925003: (1m24.929865359s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-925003
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-925003
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-925003
--- PASS: TestAddons/StoppedEnableDisable (85.14s)

TestCertOptions (67.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-177491 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-177491 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m5.12554095s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-177491 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-177491 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-177491 -- "sudo cat /etc/kubernetes/admin.conf"
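TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names from the start command actually land in the API server certificate's SANs; the log does this with openssl over ssh. An equivalent hedged sketch in Go using crypto/x509 (the ssh invocation mirrors the one logged above; everything else is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os/exec"
)

func main() {
	// Read the apiserver certificate out of the node, as the test does over ssh.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-177491",
		"ssh", "sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(out)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// The SANs should include the values passed on the start command line.
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses)  // expect 127.0.0.1, 192.168.15.15, ...
}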
helpers_test.go:175: Cleaning up "cert-options-177491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-177491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-177491: (1.473639284s)
--- PASS: TestCertOptions (67.20s)

TestCertExpiration (289.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-327953 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1003 18:41:36.172772   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-327953 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.044132018s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-327953 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-327953 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.398308743s)
helpers_test.go:175: Cleaning up "cert-expiration-327953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-327953
--- PASS: TestCertExpiration (289.30s)

TestForceSystemdFlag (61.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-670426 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-670426 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.474620435s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-670426 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-670426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-670426
--- PASS: TestForceSystemdFlag (61.65s)

TestForceSystemdEnv (60.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-924555 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1003 18:41:19.249025   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-924555 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.125111213s)
helpers_test.go:175: Cleaning up "force-systemd-env-924555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-924555
--- PASS: TestForceSystemdEnv (60.04s)

TestErrorSpam/setup (37.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-768040 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-768040 --driver=kvm2  --container-runtime=crio
E1003 17:51:36.174011   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.180451   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.191920   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.213469   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.255012   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.336482   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.498083   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:36.819866   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:37.462033   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:38.743816   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:41.306381   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:51:46.428118   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-768040 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-768040 --driver=kvm2  --container-runtime=crio: (37.914412876s)
--- PASS: TestErrorSpam/setup (37.91s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.66s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 status
--- PASS: TestErrorSpam/status (0.66s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (5.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 stop
E1003 17:51:56.670331   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 stop: (1.89632922s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 stop: (1.897630189s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-768040 --log_dir /tmp/nospam-768040 stop: (1.701145214s)
--- PASS: TestErrorSpam/stop (5.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21625-8656/.minikube/files/etc/test/nested/copy/12564/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965419 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1003 17:52:17.152410   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:52:58.115335   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-965419 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.846405872s)
--- PASS: TestFunctional/serial/StartWithProxy (83.85s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.79s)

=== RUN   TestFunctional/serial/SoftStart
I1003 17:53:26.423240   12564 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965419 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-965419 --alsologtostderr -v=8: (32.793414443s)
functional_test.go:678: soft start took 32.794034138s for "functional-965419" cluster.
I1003 17:53:59.217010   12564 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (32.79s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-965419 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 cache add registry.k8s.io/pause:3.1: (1.115514327s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 cache add registry.k8s.io/pause:3.3: (1.281559369s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 cache add registry.k8s.io/pause:latest: (1.13061801s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-965419 /tmp/TestFunctionalserialCacheCmdcacheadd_local2154306586/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cache add minikube-local-cache-test:functional-965419
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 cache add minikube-local-cache-test:functional-965419: (1.78893461s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cache delete minikube-local-cache-test:functional-965419
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-965419
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (184.160482ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 cache reload: (1.014301839s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh sudo crictl inspecti registry.k8s.io/pause:latest
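The cache_reload sequence above is: remove the cached image inside the node, confirm crictl inspecti now fails, run "minikube cache reload", then confirm the image is back. A compact sketch of that assertion flow via os/exec (profile and image are from the log; the run helper is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// run is a tiny hypothetical helper around the minikube binary used below.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	const profile = "functional-965419"
	const img = "registry.k8s.io/pause:latest"

	// 1. Remove the image inside the node.
	_ = run("-p", profile, "ssh", "sudo", "crictl", "rmi", img)

	// 2. inspecti must now fail (exit status 1, as in the log above).
	if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}

	// 3. Reload minikube's local cache back into the node.
	if err := run("-p", profile, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}

	// 4. The image should be present again.
	if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}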
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 kubectl -- --context functional-965419 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-965419 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (46.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965419 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1003 17:54:20.038954   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-965419 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.39725441s)
functional_test.go:776: restart took 46.397386973s for "functional-965419" cluster.
I1003 17:54:53.743635   12564 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (46.40s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-965419 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
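
The health check walks the control-plane pods matched by the label selector; roughly the same view can be pulled with jsonpath (a sketch, not the test's own code):

    kubectl --context functional-965419 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'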

TestFunctional/serial/LogsCmd (1.5s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 logs: (1.499249759s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

TestFunctional/serial/LogsFileCmd (1.51s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 logs --file /tmp/TestFunctionalserialLogsFileCmd2327889054/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 logs --file /tmp/TestFunctionalserialLogsFileCmd2327889054/001/logs.txt: (1.51314457s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)
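
`logs` writes to stdout by default; `--file` redirects the same output to a path of your choosing, which is all this test checks:

    out/minikube-linux-amd64 -p functional-965419 logs --file /tmp/logs.txt   # destination path is arbitrary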

TestFunctional/serial/InvalidService (3.84s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-965419 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-965419
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-965419: exit status 115 (239.011081ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.53:31404 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-965419 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.84s)
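
The failure above is the intended behavior: the Service object exists and is allocated a NodePort URL, but it has no running pods behind it, so `minikube service` exits with SVC_UNREACHABLE (status 115). Reproducible as:

    kubectl --context functional-965419 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-965419; echo "exit=$?"   # expect 115
    kubectl --context functional-965419 delete -f testdata/invalidsvc.yaml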

TestFunctional/parallel/ConfigCmd (0.41s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 config get cpus: exit status 14 (66.204968ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 config get cpus: exit status 14 (66.644212ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
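
`config get` on an unset key exits with status 14, which is exactly what the two "Non-zero exit" runs above assert; the full cycle:

    out/minikube-linux-amd64 -p functional-965419 config unset cpus
    out/minikube-linux-amd64 -p functional-965419 config get cpus || echo "exit=$?"   # 14: key not found
    out/minikube-linux-amd64 -p functional-965419 config set cpus 2
    out/minikube-linux-amd64 -p functional-965419 config get cpus                     # prints 2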

TestFunctional/parallel/DashboardCmd (30.17s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-965419 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-965419 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 18773: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.17s)
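
The "unable to kill pid" message is the expected teardown race (the dashboard process had already exited), not a failure. To run the same thing interactively:

    out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-965419   # prints the proxy URL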

TestFunctional/parallel/DryRun (0.25s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965419 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-965419 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (125.744641ms)

-- stdout --
	* [functional-965419] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1003 17:55:12.169111   18730 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:55:12.169221   18730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:55:12.169230   18730 out.go:374] Setting ErrFile to fd 2...
	I1003 17:55:12.169234   18730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:55:12.169434   18730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 17:55:12.169864   18730 out.go:368] Setting JSON to false
	I1003 17:55:12.170772   18730 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2256,"bootTime":1759511856,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:55:12.170889   18730 start.go:140] virtualization: kvm guest
	I1003 17:55:12.172926   18730 out.go:179] * [functional-965419] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 17:55:12.174605   18730 notify.go:220] Checking for updates...
	I1003 17:55:12.174618   18730 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 17:55:12.175967   18730 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:55:12.177338   18730 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 17:55:12.178765   18730 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:55:12.180357   18730 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 17:55:12.181811   18730 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:55:12.183774   18730 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:55:12.184233   18730 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:55:12.216874   18730 out.go:179] * Using the kvm2 driver based on existing profile
	I1003 17:55:12.218317   18730 start.go:304] selected driver: kvm2
	I1003 17:55:12.218335   18730 start.go:924] validating driver "kvm2" against &{Name:functional-965419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-965419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:55:12.218480   18730 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:55:12.220766   18730 out.go:203] 
	W1003 17:55:12.222357   18730 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 17:55:12.223999   18730 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965419 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
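
--dry-run performs validation only, so the undersized memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) without touching the running VM:

    out/minikube-linux-amd64 start -p functional-965419 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio; echo "exit=$?"   # expect 23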

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965419 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-965419 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (123.581919ms)

-- stdout --
	* [functional-965419] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1003 17:55:10.365892   18589 out.go:360] Setting OutFile to fd 1 ...
	I1003 17:55:10.366023   18589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:55:10.366034   18589 out.go:374] Setting ErrFile to fd 2...
	I1003 17:55:10.366042   18589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 17:55:10.366407   18589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 17:55:10.366927   18589 out.go:368] Setting JSON to false
	I1003 17:55:10.367763   18589 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2254,"bootTime":1759511856,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 17:55:10.367881   18589 start.go:140] virtualization: kvm guest
	I1003 17:55:10.369767   18589 out.go:179] * [functional-965419] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1003 17:55:10.371490   18589 notify.go:220] Checking for updates...
	I1003 17:55:10.371597   18589 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 17:55:10.373329   18589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 17:55:10.374933   18589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 17:55:10.376374   18589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 17:55:10.377674   18589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 17:55:10.379218   18589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 17:55:10.381747   18589 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 17:55:10.382611   18589 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 17:55:10.416272   18589 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1003 17:55:10.417486   18589 start.go:304] selected driver: kvm2
	I1003 17:55:10.417502   18589 start.go:924] validating driver "kvm2" against &{Name:functional-965419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-965419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.53 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 17:55:10.417604   18589 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 17:55:10.419776   18589 out.go:203] 
	W1003 17:55:10.421002   18589 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 17:55:10.422325   18589 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
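
The French stderr is the localized form of the same dry-run failure: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB". minikube selects its message catalog from the locale environment, so the run can presumably be reproduced with:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-965419 --dry-run --memory 250MB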

TestFunctional/parallel/StatusCmd (0.81s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)
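
`status -f` takes a Go template over the status struct; the literal text around the {{...}} fields is free-form (the "kublet:" label in the command above is just a string in the test's format, not a field name). For scripting:

    out/minikube-linux-amd64 -p functional-965419 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-linux-amd64 -p functional-965419 status -o json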

TestFunctional/parallel/ServiceCmdConnect (13.5s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-965419 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-965419 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-gs7dh" [6d8a131d-2e91-4b87-b006-9cf4469eb428] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-gs7dh" [6d8a131d-2e91-4b87-b006-9cf4469eb428] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004250957s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.53:30554
functional_test.go:1680: http://192.168.39.53:30554: success! body:
Request served by hello-node-connect-7d85dfc575-gs7dh

HTTP/1.1 GET /

Host: 192.168.39.53:30554
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.50s)
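
End to end, the flow is: create a deployment, expose it as a NodePort, resolve the node URL, and hit it (the curl step is an illustration here, not part of the test):

    kubectl --context functional-965419 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-965419 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-965419 service hello-node-connect --url)
    curl -s "$URL"   # echo-server reflects the request, as in the body above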

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)
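
Both list forms read the same addon state; the JSON form is the script-friendly one:

    out/minikube-linux-amd64 -p functional-965419 addons list -o json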

TestFunctional/parallel/PersistentVolumeClaim (45.42s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [69406cfc-d019-480e-aec3-b85c9f645d9f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007148067s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-965419 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-965419 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-965419 get pvc myclaim -o=json
I1003 17:55:07.415240   12564 retry.go:31] will retry after 1.320249833s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:55826dc4-d167-4874-af02-f8e65b4cbd50 ResourceVersion:718 Generation:0 CreationTimestamp:2025-10-03 17:55:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001709740 VolumeMode:0xc001709750 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-965419 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-965419 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [69ca5d3d-3e7a-4e42-8844-e0197e1cf3e6] Pending
helpers_test.go:352: "sp-pod" [69ca5d3d-3e7a-4e42-8844-e0197e1cf3e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [69ca5d3d-3e7a-4e42-8844-e0197e1cf3e6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004715188s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-965419 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-965419 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-965419 delete -f testdata/storage-provisioner/pod.yaml: (1.168761402s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-965419 apply -f testdata/storage-provisioner/pod.yaml
I1003 17:55:25.410531   12564 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c1544e73-44f5-4a25-8431-52a4f90ecc7f] Pending
helpers_test.go:352: "sp-pod" [c1544e73-44f5-4a25-8431-52a4f90ecc7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c1544e73-44f5-4a25-8431-52a4f90ecc7f] Running
2025/10/03 17:55:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004878578s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-965419 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.42s)
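
The two sp-pod generations mount the same claim, which is how the test proves data outlives a pod delete/recreate; the shape of the check:

    kubectl --context functional-965419 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-965419 get pvc myclaim -o jsonpath='{.status.phase}'   # poll until "Bound"
    kubectl --context functional-965419 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-965419 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-965419 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-965419 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-965419 exec sp-pod -- ls /tmp/mount                    # foo is still there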

TestFunctional/parallel/SSHCmd (0.32s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

TestFunctional/parallel/CpCmd (1.11s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh -n functional-965419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cp functional-965419:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3849043182/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh -n functional-965419 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh -n functional-965419 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)
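
`cp` works in both directions, with `<profile>:<path>` addressing a path inside the node (the local destination name below is an arbitrary example):

    out/minikube-linux-amd64 -p functional-965419 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-965419 cp functional-965419:/home/docker/cp-test.txt ./cp-test.txt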

TestFunctional/parallel/MySQL (28.36s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-965419 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-5mbjc" [706004bd-95d9-4ffc-b29d-15d026caae99] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-5mbjc" [706004bd-95d9-4ffc-b29d-15d026caae99] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.788487936s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-965419 exec mysql-5bb876957f-5mbjc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-965419 exec mysql-5bb876957f-5mbjc -- mysql -ppassword -e "show databases;": exit status 1 (520.946382ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1003 17:55:35.985008   12564 retry.go:31] will retry after 802.112443ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-965419 exec mysql-5bb876957f-5mbjc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-965419 exec mysql-5bb876957f-5mbjc -- mysql -ppassword -e "show databases;": exit status 1 (120.316575ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1003 17:55:36.907959   12564 retry.go:31] will retry after 1.784145885s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-965419 exec mysql-5bb876957f-5mbjc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.36s)
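
The two ERROR 2002 retries are expected: the pod reports Running before mysqld finishes initializing, so the harness backs off and tries again. By hand, the equivalent is a simple poll (a sketch; `deploy/mysql` lets kubectl pick a pod from the deployment):

    until kubectl --context functional-965419 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;"; do sleep 2; done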

TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12564/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /etc/test/nested/copy/12564/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
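
The checked path follows minikube's file-sync convention: files placed under $MINIKUBE_HOME/files on the host are copied to the same path inside the VM at start. A sketch of how such a file is presumably seeded (the content matches this run):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/12564"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/12564/hosts"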

TestFunctional/parallel/CertSync (1.08s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12564.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /etc/ssl/certs/12564.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12564.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /usr/share/ca-certificates/12564.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/125642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /etc/ssl/certs/125642.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/125642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /usr/share/ca-certificates/125642.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.08s)
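
The `.0` filenames are OpenSSL subject-hash links, which is why the same certificate is checked under three paths; the hash can be recomputed inside the VM to confirm the mapping (a sketch):

    out/minikube-linux-amd64 -p functional-965419 ssh \
      "openssl x509 -noout -hash -in /usr/share/ca-certificates/12564.pem"   # expect 51391683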

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-965419 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
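
The same go-template can print values alongside keys:

    kubectl --context functional-965419 get nodes -o go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'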

TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh "sudo systemctl is-active docker": exit status 1 (172.230045ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh "sudo systemctl is-active containerd": exit status 1 (181.388925ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
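
Exit status 3 is `systemctl is-active` reporting an inactive unit, relayed through ssh; on a crio profile only the crio runtime should be active (unit name assumed here):

    out/minikube-linux-amd64 -p functional-965419 ssh "sudo systemctl is-active docker"; echo "exit=$?"   # inactive, 3
    out/minikube-linux-amd64 -p functional-965419 ssh "sudo systemctl is-active crio"                     # active, 0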

TestFunctional/parallel/License (0.38s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965419 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-965419
localhost/kicbase/echo-server:functional-965419
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965419 image ls --format short --alsologtostderr:
I1003 17:55:39.477981   19106 out.go:360] Setting OutFile to fd 1 ...
I1003 17:55:39.478230   19106 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:39.478239   19106 out.go:374] Setting ErrFile to fd 2...
I1003 17:55:39.478243   19106 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:39.478432   19106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
I1003 17:55:39.478989   19106 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:39.479079   19106 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:39.481004   19106 ssh_runner.go:195] Run: systemctl --version
I1003 17:55:39.483607   19106 main.go:141] libmachine: domain functional-965419 has defined MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:39.484184   19106 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:6c:4f", ip: ""} in network mk-functional-965419: {Iface:virbr1 ExpiryTime:2025-10-03 18:52:17 +0000 UTC Type:0 Mac:52:54:00:f2:6c:4f Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-965419 Clientid:01:52:54:00:f2:6c:4f}
I1003 17:55:39.484215   19106 main.go:141] libmachine: domain functional-965419 has defined IP address 192.168.39.53 and MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:39.484386   19106 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/functional-965419/id_rsa Username:docker}
I1003 17:55:39.567436   19106 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
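
`image ls` renders the same image list in four shapes, all backed by the `sudo crictl images --output json` call visible in the stderr trace:

    out/minikube-linux-amd64 -p functional-965419 image ls --format short   # also: table, json, yaml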

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965419 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ 203ad09fc1566 │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-965419  │ f6d741c7d07a7 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-965419  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965419 image ls --format table --alsologtostderr:
I1003 17:55:42.807468   19178 out.go:360] Setting OutFile to fd 1 ...
I1003 17:55:42.807706   19178 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:42.807715   19178 out.go:374] Setting ErrFile to fd 2...
I1003 17:55:42.807719   19178 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:42.807898   19178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
I1003 17:55:42.808450   19178 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:42.808579   19178 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:42.810653   19178 ssh_runner.go:195] Run: systemctl --version
I1003 17:55:42.813167   19178 main.go:141] libmachine: domain functional-965419 has defined MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:42.813758   19178 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:6c:4f", ip: ""} in network mk-functional-965419: {Iface:virbr1 ExpiryTime:2025-10-03 18:52:17 +0000 UTC Type:0 Mac:52:54:00:f2:6c:4f Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-965419 Clientid:01:52:54:00:f2:6c:4f}
I1003 17:55:42.813842   19178 main.go:141] libmachine: domain functional-965419 has defined IP address 192.168.39.53 and MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:42.814089   19178 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/functional-965419/id_rsa Username:docker}
I1003 17:55:42.901517   19178 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965419 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-965419"],"size":"4945246"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d","repoDigests":["docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c","docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"f6d741c7d07a74db302cf3c9a10849498772e4a500fede7393dfe57884e7c17b","repoDigests":["localhost/minikube-local-cache-test@sha256:ce374b01192ec5c5c59a9c0ff90a3c6a51b6601126d0b0c16825f8a6e048bb9d"],"repoTags":["localhost/minikube-local-cache-test:functional-965419"],"size":"3326"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965419 image ls --format json --alsologtostderr:
I1003 17:55:42.584622   19167 out.go:360] Setting OutFile to fd 1 ...
I1003 17:55:42.584893   19167 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:42.584907   19167 out.go:374] Setting ErrFile to fd 2...
I1003 17:55:42.584911   19167 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:42.585082   19167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
I1003 17:55:42.585694   19167 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:42.585854   19167 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:42.588114   19167 ssh_runner.go:195] Run: systemctl --version
I1003 17:55:42.590791   19167 main.go:141] libmachine: domain functional-965419 has defined MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:42.591415   19167 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:6c:4f", ip: ""} in network mk-functional-965419: {Iface:virbr1 ExpiryTime:2025-10-03 18:52:17 +0000 UTC Type:0 Mac:52:54:00:f2:6c:4f Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-965419 Clientid:01:52:54:00:f2:6c:4f}
I1003 17:55:42.591444   19167 main.go:141] libmachine: domain functional-965419 has defined IP address 192.168.39.53 and MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:42.591638   19167 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/functional-965419/id_rsa Username:docker}
I1003 17:55:42.684368   19167 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
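Note: per the stderr trace above, the JSON listing is reproducible outside the test harness; the CLI call simply shells into the guest and runs crictl (profile name taken from this run):
    out/minikube-linux-amd64 -p functional-965419 image ls --format json
    # equivalent listing taken directly inside the VM
    out/minikube-linux-amd64 -p functional-965419 ssh -- sudo crictl images --output json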

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965419 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-965419
size: "4945246"
- id: f6d741c7d07a74db302cf3c9a10849498772e4a500fede7393dfe57884e7c17b
repoDigests:
- localhost/minikube-local-cache-test@sha256:ce374b01192ec5c5c59a9c0ff90a3c6a51b6601126d0b0c16825f8a6e048bb9d
repoTags:
- localhost/minikube-local-cache-test:functional-965419
size: "3326"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d
repoDigests:
- docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965419 image ls --format yaml --alsologtostderr:
I1003 17:55:39.674122   19117 out.go:360] Setting OutFile to fd 1 ...
I1003 17:55:39.674219   19117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:39.674226   19117 out.go:374] Setting ErrFile to fd 2...
I1003 17:55:39.674230   19117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:39.674426   19117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
I1003 17:55:39.674985   19117 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:39.675073   19117 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:39.677028   19117 ssh_runner.go:195] Run: systemctl --version
I1003 17:55:39.679235   19117 main.go:141] libmachine: domain functional-965419 has defined MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:39.679664   19117 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:6c:4f", ip: ""} in network mk-functional-965419: {Iface:virbr1 ExpiryTime:2025-10-03 18:52:17 +0000 UTC Type:0 Mac:52:54:00:f2:6c:4f Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-965419 Clientid:01:52:54:00:f2:6c:4f}
I1003 17:55:39.679696   19117 main.go:141] libmachine: domain functional-965419 has defined IP address 192.168.39.53 and MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:39.679893   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/functional-965419/id_rsa Username:docker}
I1003 17:55:39.764138   19117 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh pgrep buildkitd: exit status 1 (156.890374ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image build -t localhost/my-image:functional-965419 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 image build -t localhost/my-image:functional-965419 testdata/build --alsologtostderr: (3.565721451s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965419 image build -t localhost/my-image:functional-965419 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9d96bec5645
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-965419
--> 5fba999c508
Successfully tagged localhost/my-image:functional-965419
5fba999c50877ad2f49b08db8c93c8b0900430bff8714f6cbacdc2bda8844a3f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965419 image build -t localhost/my-image:functional-965419 testdata/build --alsologtostderr:
I1003 17:55:40.025415   19155 out.go:360] Setting OutFile to fd 1 ...
I1003 17:55:40.025689   19155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:40.025698   19155 out.go:374] Setting ErrFile to fd 2...
I1003 17:55:40.025702   19155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 17:55:40.025896   19155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
I1003 17:55:40.026455   19155 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:40.027129   19155 config.go:182] Loaded profile config "functional-965419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 17:55:40.029531   19155 ssh_runner.go:195] Run: systemctl --version
I1003 17:55:40.032358   19155 main.go:141] libmachine: domain functional-965419 has defined MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:40.032929   19155 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f2:6c:4f", ip: ""} in network mk-functional-965419: {Iface:virbr1 ExpiryTime:2025-10-03 18:52:17 +0000 UTC Type:0 Mac:52:54:00:f2:6c:4f Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:functional-965419 Clientid:01:52:54:00:f2:6c:4f}
I1003 17:55:40.032960   19155 main.go:141] libmachine: domain functional-965419 has defined IP address 192.168.39.53 and MAC address 52:54:00:f2:6c:4f in network mk-functional-965419
I1003 17:55:40.033166   19155 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/functional-965419/id_rsa Username:docker}
I1003 17:55:40.116267   19155 build_images.go:161] Building image from path: /tmp/build.1059984258.tar
I1003 17:55:40.116360   19155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 17:55:40.130486   19155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1059984258.tar
I1003 17:55:40.135442   19155 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1059984258.tar: stat -c "%s %y" /var/lib/minikube/build/build.1059984258.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1059984258.tar': No such file or directory
I1003 17:55:40.135480   19155 ssh_runner.go:362] scp /tmp/build.1059984258.tar --> /var/lib/minikube/build/build.1059984258.tar (3072 bytes)
I1003 17:55:40.167435   19155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1059984258
I1003 17:55:40.185672   19155 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1059984258 -xf /var/lib/minikube/build/build.1059984258.tar
I1003 17:55:40.211743   19155 crio.go:315] Building image: /var/lib/minikube/build/build.1059984258
I1003 17:55:40.211829   19155 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-965419 /var/lib/minikube/build/build.1059984258 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1003 17:55:43.500367   19155 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-965419 /var/lib/minikube/build/build.1059984258 --cgroup-manager=cgroupfs: (3.288509648s)
I1003 17:55:43.500523   19155 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1059984258
I1003 17:55:43.515344   19155 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1059984258.tar
I1003 17:55:43.528530   19155 build_images.go:217] Built localhost/my-image:functional-965419 from /tmp/build.1059984258.tar
I1003 17:55:43.528575   19155 build_images.go:133] succeeded building to: functional-965419
I1003 17:55:43.528582   19155 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
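Note: the STEP 1/3..3/3 output implies a build context along these lines; this Dockerfile is reconstructed from the logged steps, not copied from testdata/build:
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
It is built into the cluster's cri-o image storage (via podman inside the guest, as traced above) with:
    out/minikube-linux-amd64 -p functional-965419 image build -t localhost/my-image:functional-965419 testdata/build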

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.77577085s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-965419
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdany-port3221698577/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759514101082385487" to /tmp/TestFunctionalparallelMountCmdany-port3221698577/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759514101082385487" to /tmp/TestFunctionalparallelMountCmdany-port3221698577/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759514101082385487" to /tmp/TestFunctionalparallelMountCmdany-port3221698577/001/test-1759514101082385487
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (171.043284ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 17:55:01.253856   12564 retry.go:31] will retry after 564.106669ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  3 17:55 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  3 17:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  3 17:55 test-1759514101082385487
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh cat /mount-9p/test-1759514101082385487
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-965419 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [a7f3c77a-adb4-4166-b565-43c5336b0fcd] Pending
helpers_test.go:352: "busybox-mount" [a7f3c77a-adb4-4166-b565-43c5336b0fcd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [a7f3c77a-adb4-4166-b565-43c5336b0fcd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [a7f3c77a-adb4-4166-b565-43c5336b0fcd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00687445s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-965419 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh stat /mount-9p/created-by-pod
I1003 17:55:08.955400   12564 detect.go:223] nested VM detected
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdany-port3221698577/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.21s)
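Note: the sequence above boils down to a host-to-guest 9p mount round trip; a minimal manual version (with /tmp/mnt standing in for the per-test temp dir) looks like:
    out/minikube-linux-amd64 mount -p functional-965419 /tmp/mnt:/mount-9p &
    out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is live
    out/minikube-linux-amd64 -p functional-965419 ssh "sudo umount -f /mount-9p"         # detach before stopping the mount process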

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "244.44797ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.428606ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "251.117619ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.629899ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image load --daemon kicbase/echo-server:functional-965419 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 image load --daemon kicbase/echo-server:functional-965419 --alsologtostderr: (1.39913903s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image load --daemon kicbase/echo-server:functional-965419 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-965419
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image load --daemon kicbase/echo-server:functional-965419 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image save kicbase/echo-server:functional-965419 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image rm kicbase/echo-server:functional-965419 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-965419
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 image save --daemon kicbase/echo-server:functional-965419 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-965419
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.65s)
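Note: taken together, the last four image tests exercise one save/remove/reload cycle; condensed from the commands logged above (tar path shortened for illustration):
    out/minikube-linux-amd64 -p functional-965419 image save kicbase/echo-server:functional-965419 echo-server-save.tar
    out/minikube-linux-amd64 -p functional-965419 image rm kicbase/echo-server:functional-965419
    out/minikube-linux-amd64 -p functional-965419 image load echo-server-save.tar
    out/minikube-linux-amd64 -p functional-965419 image save --daemon kicbase/echo-server:functional-965419   # push back into the local Docker daemon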

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdspecific-port2707721757/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.232943ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 17:55:09.473110   12564 retry.go:31] will retry after 563.790205ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdspecific-port2707721757/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh "sudo umount -f /mount-9p": exit status 1 (171.228888ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-965419 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdspecific-port2707721757/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.45s)
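Note: the exit status 32 above is expected here; the test unmounts after the mount daemon has already been stopped, so umount reports "not mounted". Pinning the 9p server to a fixed port only changes the mount invocation (host dir illustrative):
    out/minikube-linux-amd64 mount -p functional-965419 /tmp/mnt:/mount-9p --port 46464 &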

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
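Note: all three UpdateContextCmd subtests drive the same command, which re-syncs the profile's kubeconfig entry with the VM's current IP and port (a no-op in this run, hence the near-zero durations):
    out/minikube-linux-amd64 -p functional-965419 update-context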

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1379254547/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1379254547/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1379254547/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T" /mount1: exit status 1 (181.566601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 17:55:10.924703   12564 retry.go:31] will retry after 401.625224ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-965419 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1379254547/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1379254547/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965419 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1379254547/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)
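Note: cleanup verification hinges on the --kill flag, which tears down every outstanding mount daemon for the profile in one call; the subsequent "stopping [...]" steps then find no parent process, as logged above:
    out/minikube-linux-amd64 mount -p functional-965419 --kill=true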

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (29.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-965419 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-965419 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-s84k5" [93e75686-290e-4d70-bacf-016735112aac] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-s84k5" [93e75686-290e-4d70-bacf-016735112aac] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 29.004110376s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (29.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 service list: (1.209787299s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-965419 service list -o json: (1.209608768s)
functional_test.go:1504: Took "1.209694643s" to run "out/minikube-linux-amd64 -p functional-965419 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.53:31614
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-965419 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.53:31614
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)
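Note: condensed, the ServiceCmd flow exercised above is deploy, expose as a NodePort, then ask minikube for the reachable endpoint:
    kubectl --context functional-965419 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-965419 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-965419 service hello-node --url   # printed http://192.168.39.53:31614 in this run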

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-965419
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-965419
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-965419
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (234.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1003 17:56:36.172910   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 17:57:03.881882   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m53.454855153s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (234.03s)
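Note: the whole HA topology comes from a single start invocation; the --ha flag provisions multiple control-plane nodes (flags repeated from the log):
    out/minikube-linux-amd64 -p ha-361825 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-361825 status   # per-node host/kubelet/apiserver state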

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 kubectl -- rollout status deployment/busybox: (5.02149539s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-qnflf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-rv4zb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-qnflf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-rv4zb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-qnflf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-rv4zb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.34s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-qnflf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-qnflf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-rv4zb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-rv4zb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
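Note: the pipeline in these exec calls isolates the pod-visible address of host.minikube.internal (nslookup's fifth output line carries the answer; cut keeps the IP field) before pinging it. Standalone:
    out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-361825 kubectl -- exec busybox-7b57f96db7-lcdhp -- sh -c "ping -c 1 192.168.39.1"   # 192.168.39.1 is the KVM host gateway in this run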

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node add --alsologtostderr -v 5
E1003 18:00:01.128060   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.134562   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.146110   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.167587   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.209142   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.290714   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.452151   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:01.773473   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:02.415054   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:03.697025   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:06.259266   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:11.381566   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:00:21.623863   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 node add --alsologtostderr -v 5: (46.756016385s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.45s)
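Note: growing the cluster is a single command against the running profile; node add attaches a worker by default (adding another control-plane node would need an extra flag):
    out/minikube-linux-amd64 -p ha-361825 node add
    out/minikube-linux-amd64 -p ha-361825 status   # now includes the fourth node, ha-361825-m04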

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-361825 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1003 18:00:42.105444   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp testdata/cp-test.txt ha-361825:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4253336761/001/cp-test_ha-361825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825:/home/docker/cp-test.txt ha-361825-m02:/home/docker/cp-test_ha-361825_ha-361825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test_ha-361825_ha-361825-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825:/home/docker/cp-test.txt ha-361825-m03:/home/docker/cp-test_ha-361825_ha-361825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test_ha-361825_ha-361825-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825:/home/docker/cp-test.txt ha-361825-m04:/home/docker/cp-test_ha-361825_ha-361825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test_ha-361825_ha-361825-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp testdata/cp-test.txt ha-361825-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4253336761/001/cp-test_ha-361825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m02:/home/docker/cp-test.txt ha-361825:/home/docker/cp-test_ha-361825-m02_ha-361825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test_ha-361825-m02_ha-361825.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m02:/home/docker/cp-test.txt ha-361825-m03:/home/docker/cp-test_ha-361825-m02_ha-361825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test_ha-361825-m02_ha-361825-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m02:/home/docker/cp-test.txt ha-361825-m04:/home/docker/cp-test_ha-361825-m02_ha-361825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test_ha-361825-m02_ha-361825-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp testdata/cp-test.txt ha-361825-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4253336761/001/cp-test_ha-361825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m03:/home/docker/cp-test.txt ha-361825:/home/docker/cp-test_ha-361825-m03_ha-361825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test_ha-361825-m03_ha-361825.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m03:/home/docker/cp-test.txt ha-361825-m02:/home/docker/cp-test_ha-361825-m03_ha-361825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test_ha-361825-m03_ha-361825-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m03:/home/docker/cp-test.txt ha-361825-m04:/home/docker/cp-test_ha-361825-m03_ha-361825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test_ha-361825-m03_ha-361825-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp testdata/cp-test.txt ha-361825-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4253336761/001/cp-test_ha-361825-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m04:/home/docker/cp-test.txt ha-361825:/home/docker/cp-test_ha-361825-m04_ha-361825.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825 "sudo cat /home/docker/cp-test_ha-361825-m04_ha-361825.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m04:/home/docker/cp-test.txt ha-361825-m02:/home/docker/cp-test_ha-361825-m04_ha-361825-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m02 "sudo cat /home/docker/cp-test_ha-361825-m04_ha-361825-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 cp ha-361825-m04:/home/docker/cp-test.txt ha-361825-m03:/home/docker/cp-test_ha-361825-m04_ha-361825-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 ssh -n ha-361825-m03 "sudo cat /home/docker/cp-test_ha-361825-m04_ha-361825-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.88s)

TestMultiControlPlane/serial/StopSecondaryNode (74.27s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node stop m02 --alsologtostderr -v 5
E1003 18:01:23.066894   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:01:36.173716   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 node stop m02 --alsologtostderr -v 5: (1m13.748214028s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5: exit status 7 (519.061629ms)

-- stdout --
	ha-361825
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-361825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-361825-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-361825-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1003 18:02:06.954411   22343 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:02:06.954944   22343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:02:06.954960   22343 out.go:374] Setting ErrFile to fd 2...
	I1003 18:02:06.954968   22343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:02:06.955204   22343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 18:02:06.955383   22343 out.go:368] Setting JSON to false
	I1003 18:02:06.955414   22343 mustload.go:65] Loading cluster: ha-361825
	I1003 18:02:06.955478   22343 notify.go:220] Checking for updates...
	I1003 18:02:06.955773   22343 config.go:182] Loaded profile config "ha-361825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:02:06.955802   22343 status.go:174] checking status of ha-361825 ...
	I1003 18:02:06.958182   22343 status.go:371] ha-361825 host status = "Running" (err=<nil>)
	I1003 18:02:06.958199   22343 host.go:66] Checking if "ha-361825" exists ...
	I1003 18:02:06.960878   22343 main.go:141] libmachine: domain ha-361825 has defined MAC address 52:54:00:b2:df:23 in network mk-ha-361825
	I1003 18:02:06.961481   22343 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:df:23", ip: ""} in network mk-ha-361825: {Iface:virbr1 ExpiryTime:2025-10-03 18:56:06 +0000 UTC Type:0 Mac:52:54:00:b2:df:23 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-361825 Clientid:01:52:54:00:b2:df:23}
	I1003 18:02:06.961515   22343 main.go:141] libmachine: domain ha-361825 has defined IP address 192.168.39.22 and MAC address 52:54:00:b2:df:23 in network mk-ha-361825
	I1003 18:02:06.961685   22343 host.go:66] Checking if "ha-361825" exists ...
	I1003 18:02:06.961974   22343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:02:06.964685   22343 main.go:141] libmachine: domain ha-361825 has defined MAC address 52:54:00:b2:df:23 in network mk-ha-361825
	I1003 18:02:06.965207   22343 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b2:df:23", ip: ""} in network mk-ha-361825: {Iface:virbr1 ExpiryTime:2025-10-03 18:56:06 +0000 UTC Type:0 Mac:52:54:00:b2:df:23 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-361825 Clientid:01:52:54:00:b2:df:23}
	I1003 18:02:06.965238   22343 main.go:141] libmachine: domain ha-361825 has defined IP address 192.168.39.22 and MAC address 52:54:00:b2:df:23 in network mk-ha-361825
	I1003 18:02:06.965413   22343 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/ha-361825/id_rsa Username:docker}
	I1003 18:02:07.049341   22343 ssh_runner.go:195] Run: systemctl --version
	I1003 18:02:07.057362   22343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:02:07.076079   22343 kubeconfig.go:125] found "ha-361825" server: "https://192.168.39.254:8443"
	I1003 18:02:07.076113   22343 api_server.go:166] Checking apiserver status ...
	I1003 18:02:07.076150   22343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:02:07.097917   22343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	W1003 18:02:07.110713   22343 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:02:07.110773   22343 ssh_runner.go:195] Run: ls
	I1003 18:02:07.116988   22343 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1003 18:02:07.124337   22343 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1003 18:02:07.124373   22343 status.go:463] ha-361825 apiserver status = Running (err=<nil>)
	I1003 18:02:07.124386   22343 status.go:176] ha-361825 status: &{Name:ha-361825 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:02:07.124412   22343 status.go:174] checking status of ha-361825-m02 ...
	I1003 18:02:07.126042   22343 status.go:371] ha-361825-m02 host status = "Stopped" (err=<nil>)
	I1003 18:02:07.126060   22343 status.go:384] host is not running, skipping remaining checks
	I1003 18:02:07.126067   22343 status.go:176] ha-361825-m02 status: &{Name:ha-361825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:02:07.126085   22343 status.go:174] checking status of ha-361825-m03 ...
	I1003 18:02:07.127405   22343 status.go:371] ha-361825-m03 host status = "Running" (err=<nil>)
	I1003 18:02:07.127421   22343 host.go:66] Checking if "ha-361825-m03" exists ...
	I1003 18:02:07.130154   22343 main.go:141] libmachine: domain ha-361825-m03 has defined MAC address 52:54:00:0e:37:75 in network mk-ha-361825
	I1003 18:02:07.130649   22343 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:37:75", ip: ""} in network mk-ha-361825: {Iface:virbr1 ExpiryTime:2025-10-03 18:58:34 +0000 UTC Type:0 Mac:52:54:00:0e:37:75 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-361825-m03 Clientid:01:52:54:00:0e:37:75}
	I1003 18:02:07.130676   22343 main.go:141] libmachine: domain ha-361825-m03 has defined IP address 192.168.39.188 and MAC address 52:54:00:0e:37:75 in network mk-ha-361825
	I1003 18:02:07.130832   22343 host.go:66] Checking if "ha-361825-m03" exists ...
	I1003 18:02:07.131039   22343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:02:07.133341   22343 main.go:141] libmachine: domain ha-361825-m03 has defined MAC address 52:54:00:0e:37:75 in network mk-ha-361825
	I1003 18:02:07.133709   22343 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0e:37:75", ip: ""} in network mk-ha-361825: {Iface:virbr1 ExpiryTime:2025-10-03 18:58:34 +0000 UTC Type:0 Mac:52:54:00:0e:37:75 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:ha-361825-m03 Clientid:01:52:54:00:0e:37:75}
	I1003 18:02:07.133737   22343 main.go:141] libmachine: domain ha-361825-m03 has defined IP address 192.168.39.188 and MAC address 52:54:00:0e:37:75 in network mk-ha-361825
	I1003 18:02:07.133892   22343 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/ha-361825-m03/id_rsa Username:docker}
	I1003 18:02:07.220096   22343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:02:07.240896   22343 kubeconfig.go:125] found "ha-361825" server: "https://192.168.39.254:8443"
	I1003 18:02:07.240925   22343 api_server.go:166] Checking apiserver status ...
	I1003 18:02:07.240979   22343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:02:07.269083   22343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1797/cgroup
	W1003 18:02:07.282224   22343 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1797/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:02:07.282301   22343 ssh_runner.go:195] Run: ls
	I1003 18:02:07.289983   22343 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1003 18:02:07.295024   22343 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1003 18:02:07.295048   22343 status.go:463] ha-361825-m03 apiserver status = Running (err=<nil>)
	I1003 18:02:07.295055   22343 status.go:176] ha-361825-m03 status: &{Name:ha-361825-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:02:07.295070   22343 status.go:174] checking status of ha-361825-m04 ...
	I1003 18:02:07.296895   22343 status.go:371] ha-361825-m04 host status = "Running" (err=<nil>)
	I1003 18:02:07.296918   22343 host.go:66] Checking if "ha-361825-m04" exists ...
	I1003 18:02:07.300022   22343 main.go:141] libmachine: domain ha-361825-m04 has defined MAC address 52:54:00:1e:20:e2 in network mk-ha-361825
	I1003 18:02:07.300418   22343 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:20:e2", ip: ""} in network mk-ha-361825: {Iface:virbr1 ExpiryTime:2025-10-03 19:00:10 +0000 UTC Type:0 Mac:52:54:00:1e:20:e2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-361825-m04 Clientid:01:52:54:00:1e:20:e2}
	I1003 18:02:07.300437   22343 main.go:141] libmachine: domain ha-361825-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:1e:20:e2 in network mk-ha-361825
	I1003 18:02:07.300592   22343 host.go:66] Checking if "ha-361825-m04" exists ...
	I1003 18:02:07.300821   22343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:02:07.303221   22343 main.go:141] libmachine: domain ha-361825-m04 has defined MAC address 52:54:00:1e:20:e2 in network mk-ha-361825
	I1003 18:02:07.303734   22343 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1e:20:e2", ip: ""} in network mk-ha-361825: {Iface:virbr1 ExpiryTime:2025-10-03 19:00:10 +0000 UTC Type:0 Mac:52:54:00:1e:20:e2 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-361825-m04 Clientid:01:52:54:00:1e:20:e2}
	I1003 18:02:07.303758   22343 main.go:141] libmachine: domain ha-361825-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:1e:20:e2 in network mk-ha-361825
	I1003 18:02:07.303962   22343 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/ha-361825-m04/id_rsa Username:docker}
	I1003 18:02:07.395542   22343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:02:07.414796   22343 status.go:176] ha-361825-m04 status: &{Name:ha-361825-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (74.27s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.99s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 node start m02 --alsologtostderr -v 5: (35.139684218s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.99s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.04s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 stop --alsologtostderr -v 5
E1003 18:02:44.988890   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:05:01.132594   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:05:28.830915   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:06:36.173474   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 stop --alsologtostderr -v 5: (4m6.749001402s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 start --wait true --alsologtostderr -v 5
E1003 18:07:59.243579   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 start --wait true --alsologtostderr -v 5: (2m1.118622932s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.04s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 node delete m03 --alsologtostderr -v 5: (17.700461944s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.35s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (243.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 stop --alsologtostderr -v 5
E1003 18:10:01.130045   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:11:36.175364   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 stop --alsologtostderr -v 5: (4m3.917411012s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5: exit status 7 (64.641895ms)

-- stdout --
	ha-361825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-361825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-361825-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1003 18:13:15.624597   25621 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:13:15.624851   25621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:13:15.624861   25621 out.go:374] Setting ErrFile to fd 2...
	I1003 18:13:15.624865   25621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:13:15.625059   25621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 18:13:15.625231   25621 out.go:368] Setting JSON to false
	I1003 18:13:15.625265   25621 mustload.go:65] Loading cluster: ha-361825
	I1003 18:13:15.625313   25621 notify.go:220] Checking for updates...
	I1003 18:13:15.625847   25621 config.go:182] Loaded profile config "ha-361825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:13:15.625874   25621 status.go:174] checking status of ha-361825 ...
	I1003 18:13:15.628415   25621 status.go:371] ha-361825 host status = "Stopped" (err=<nil>)
	I1003 18:13:15.628433   25621 status.go:384] host is not running, skipping remaining checks
	I1003 18:13:15.628437   25621 status.go:176] ha-361825 status: &{Name:ha-361825 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:13:15.628454   25621 status.go:174] checking status of ha-361825-m02 ...
	I1003 18:13:15.629577   25621 status.go:371] ha-361825-m02 host status = "Stopped" (err=<nil>)
	I1003 18:13:15.629592   25621 status.go:384] host is not running, skipping remaining checks
	I1003 18:13:15.629598   25621 status.go:176] ha-361825-m02 status: &{Name:ha-361825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:13:15.629608   25621 status.go:174] checking status of ha-361825-m04 ...
	I1003 18:13:15.630953   25621 status.go:371] ha-361825-m04 host status = "Stopped" (err=<nil>)
	I1003 18:13:15.630967   25621 status.go:384] host is not running, skipping remaining checks
	I1003 18:13:15.630971   25621 status.go:176] ha-361825-m04 status: &{Name:ha-361825-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (243.98s)

TestMultiControlPlane/serial/RestartCluster (80.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m20.304482298s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.98s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (76.01s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 node add --control-plane --alsologtostderr -v 5
E1003 18:15:01.128597   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-361825 node add --control-plane --alsologtostderr -v 5: (1m15.318213092s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-361825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

TestJSONOutput/start/Command (77.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-695765 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1003 18:16:24.195432   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:16:36.175262   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-695765 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.550645704s)
--- PASS: TestJSONOutput/start/Command (77.55s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-695765 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-695765 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-695765 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-695765 --output=json --user=testUser: (6.895512603s)
--- PASS: TestJSONOutput/stop/Command (6.90s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-412452 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-412452 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.2674ms)

-- stdout --
	{"specversion":"1.0","id":"5a06896e-610a-471f-80d2-12d1db4f1904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-412452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"639a1b2b-e3c0-460d-84d6-caf83fab0f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21625"}}
	{"specversion":"1.0","id":"f7359f63-e738-4144-98c3-03577c34ea33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"066f1669-31af-453d-a410-ee3b810978bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig"}}
	{"specversion":"1.0","id":"def98269-e56c-4eb8-b706-6136e4459fd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube"}}
	{"specversion":"1.0","id":"66abc322-82c2-49b3-9545-1a6916e0d4ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5b57450a-cf54-4e21-98cb-e89584f9fed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"769ce4fe-9fbd-47e1-bdc3-200674c8b97f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-412452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-412452
--- PASS: TestErrorJSONOutput (0.24s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (80.97s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-364722 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-364722 --driver=kvm2  --container-runtime=crio: (39.28129044s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-367730 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-367730 --driver=kvm2  --container-runtime=crio: (39.0809095s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-364722
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-367730
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-367730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-367730
helpers_test.go:175: Cleaning up "first-364722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-364722
--- PASS: TestMinikubeProfile (80.97s)

TestMountStart/serial/StartWithMountFirst (21.07s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-606938 --memory=3072 --mount-string /tmp/TestMountStartserial25433481/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-606938 --memory=3072 --mount-string /tmp/TestMountStartserial25433481/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.067164149s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.07s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-606938 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-606938 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (20.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-621118 --memory=3072 --mount-string /tmp/TestMountStartserial25433481/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-621118 --memory=3072 --mount-string /tmp/TestMountStartserial25433481/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.553523068s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.55s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621118 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621118 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-606938 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621118 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621118 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-621118
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-621118: (1.260180829s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (18.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-621118
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-621118: (17.149121152s)
--- PASS: TestMountStart/serial/RestartStopped (18.15s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621118 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-621118 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (102.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-137840 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1003 18:20:01.128949   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-137840 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m42.491541911s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.85s)

TestMultiNode/serial/DeployApp2Nodes (5.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-137840 -- rollout status deployment/busybox: (4.305432685s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z67td -- nslookup kubernetes.io
E1003 18:21:36.173040   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z95dx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z67td -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z95dx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z67td -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z95dx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.97s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z67td -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z67td -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z95dx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-137840 -- exec busybox-7b57f96db7-z95dx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
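
The host-reachability check reduces to resolving host.minikube.internal inside a pod and pinging the address it returns. A sketch under the same assumptions (<busybox-pod> is a placeholder):

  # nslookup in busybox prints the resolved address on line 5; awk/cut extract it.
  HOST_IP=$(minikube kubectl -p multinode-137840 -- exec <busybox-pod> -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  minikube kubectl -p multinode-137840 -- exec <busybox-pod> -- sh -c "ping -c 1 $HOST_IP"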

TestMultiNode/serial/AddNode (46.72s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-137840 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-137840 -v=5 --alsologtostderr: (46.256414362s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.72s)
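
Adding a worker is a single command against the profile; a sketch of what the test runs:

  minikube node add -p multinode-137840    # provisions and joins the next worker (m03 here)
  minikube -p multinode-137840 status      # all hosts/kubelets should report Running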

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-137840 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.48s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

TestMultiNode/serial/CopyFile (6.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp testdata/cp-test.txt multinode-137840:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile397205990/001/cp-test_multinode-137840.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840:/home/docker/cp-test.txt multinode-137840-m02:/home/docker/cp-test_multinode-137840_multinode-137840-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test_multinode-137840_multinode-137840-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840:/home/docker/cp-test.txt multinode-137840-m03:/home/docker/cp-test_multinode-137840_multinode-137840-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m03 "sudo cat /home/docker/cp-test_multinode-137840_multinode-137840-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp testdata/cp-test.txt multinode-137840-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile397205990/001/cp-test_multinode-137840-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840-m02:/home/docker/cp-test.txt multinode-137840:/home/docker/cp-test_multinode-137840-m02_multinode-137840.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840 "sudo cat /home/docker/cp-test_multinode-137840-m02_multinode-137840.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840-m02:/home/docker/cp-test.txt multinode-137840-m03:/home/docker/cp-test_multinode-137840-m02_multinode-137840-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m03 "sudo cat /home/docker/cp-test_multinode-137840-m02_multinode-137840-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp testdata/cp-test.txt multinode-137840-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile397205990/001/cp-test_multinode-137840-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840-m03:/home/docker/cp-test.txt multinode-137840:/home/docker/cp-test_multinode-137840-m03_multinode-137840.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840 "sudo cat /home/docker/cp-test_multinode-137840-m03_multinode-137840.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 cp multinode-137840-m03:/home/docker/cp-test.txt multinode-137840-m02:/home/docker/cp-test_multinode-137840-m03_multinode-137840-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test_multinode-137840-m03_multinode-137840-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.13s)
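
The copy matrix above exercises minikube cp in all three directions, with minikube ssh used to verify the contents. A condensed sketch, assuming the same profile and node names:

  # host -> node
  minikube -p multinode-137840 cp testdata/cp-test.txt multinode-137840:/home/docker/cp-test.txt
  # node -> host
  minikube -p multinode-137840 cp multinode-137840:/home/docker/cp-test.txt /tmp/cp-test.txt
  # node -> node
  minikube -p multinode-137840 cp multinode-137840:/home/docker/cp-test.txt multinode-137840-m02:/home/docker/cp-test.txt
  # verify on the target node
  minikube -p multinode-137840 ssh -n multinode-137840-m02 "sudo cat /home/docker/cp-test.txt"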

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-137840 node stop m03: (1.617715635s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-137840 status: exit status 7 (343.447487ms)
-- stdout --
	multinode-137840
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-137840-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-137840-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr: exit status 7 (330.987924ms)
-- stdout --
	multinode-137840
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-137840-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-137840-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1003 18:22:33.613607   31512 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:22:33.613872   31512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:22:33.613880   31512 out.go:374] Setting ErrFile to fd 2...
	I1003 18:22:33.613884   31512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:22:33.614054   31512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 18:22:33.614213   31512 out.go:368] Setting JSON to false
	I1003 18:22:33.614246   31512 mustload.go:65] Loading cluster: multinode-137840
	I1003 18:22:33.614299   31512 notify.go:220] Checking for updates...
	I1003 18:22:33.614771   31512 config.go:182] Loaded profile config "multinode-137840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:22:33.614802   31512 status.go:174] checking status of multinode-137840 ...
	I1003 18:22:33.616927   31512 status.go:371] multinode-137840 host status = "Running" (err=<nil>)
	I1003 18:22:33.616941   31512 host.go:66] Checking if "multinode-137840" exists ...
	I1003 18:22:33.619246   31512 main.go:141] libmachine: domain multinode-137840 has defined MAC address 52:54:00:f6:95:33 in network mk-multinode-137840
	I1003 18:22:33.619646   31512 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:95:33", ip: ""} in network mk-multinode-137840: {Iface:virbr1 ExpiryTime:2025-10-03 19:20:03 +0000 UTC Type:0 Mac:52:54:00:f6:95:33 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-137840 Clientid:01:52:54:00:f6:95:33}
	I1003 18:22:33.619671   31512 main.go:141] libmachine: domain multinode-137840 has defined IP address 192.168.39.240 and MAC address 52:54:00:f6:95:33 in network mk-multinode-137840
	I1003 18:22:33.619806   31512 host.go:66] Checking if "multinode-137840" exists ...
	I1003 18:22:33.620094   31512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:22:33.622502   31512 main.go:141] libmachine: domain multinode-137840 has defined MAC address 52:54:00:f6:95:33 in network mk-multinode-137840
	I1003 18:22:33.623079   31512 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:95:33", ip: ""} in network mk-multinode-137840: {Iface:virbr1 ExpiryTime:2025-10-03 19:20:03 +0000 UTC Type:0 Mac:52:54:00:f6:95:33 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-137840 Clientid:01:52:54:00:f6:95:33}
	I1003 18:22:33.623108   31512 main.go:141] libmachine: domain multinode-137840 has defined IP address 192.168.39.240 and MAC address 52:54:00:f6:95:33 in network mk-multinode-137840
	I1003 18:22:33.623258   31512 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/multinode-137840/id_rsa Username:docker}
	I1003 18:22:33.704922   31512 ssh_runner.go:195] Run: systemctl --version
	I1003 18:22:33.711655   31512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:22:33.729436   31512 kubeconfig.go:125] found "multinode-137840" server: "https://192.168.39.240:8443"
	I1003 18:22:33.729466   31512 api_server.go:166] Checking apiserver status ...
	I1003 18:22:33.729503   31512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:22:33.750146   31512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup
	W1003 18:22:33.762890   31512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:22:33.762948   31512 ssh_runner.go:195] Run: ls
	I1003 18:22:33.768721   31512 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I1003 18:22:33.773567   31512 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I1003 18:22:33.773607   31512 status.go:463] multinode-137840 apiserver status = Running (err=<nil>)
	I1003 18:22:33.773620   31512 status.go:176] multinode-137840 status: &{Name:multinode-137840 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:22:33.773653   31512 status.go:174] checking status of multinode-137840-m02 ...
	I1003 18:22:33.775529   31512 status.go:371] multinode-137840-m02 host status = "Running" (err=<nil>)
	I1003 18:22:33.775548   31512 host.go:66] Checking if "multinode-137840-m02" exists ...
	I1003 18:22:33.778771   31512 main.go:141] libmachine: domain multinode-137840-m02 has defined MAC address 52:54:00:cf:29:df in network mk-multinode-137840
	I1003 18:22:33.779191   31512 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cf:29:df", ip: ""} in network mk-multinode-137840: {Iface:virbr1 ExpiryTime:2025-10-03 19:21:00 +0000 UTC Type:0 Mac:52:54:00:cf:29:df Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-137840-m02 Clientid:01:52:54:00:cf:29:df}
	I1003 18:22:33.779216   31512 main.go:141] libmachine: domain multinode-137840-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:cf:29:df in network mk-multinode-137840
	I1003 18:22:33.779332   31512 host.go:66] Checking if "multinode-137840-m02" exists ...
	I1003 18:22:33.779539   31512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:22:33.781631   31512 main.go:141] libmachine: domain multinode-137840-m02 has defined MAC address 52:54:00:cf:29:df in network mk-multinode-137840
	I1003 18:22:33.782176   31512 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cf:29:df", ip: ""} in network mk-multinode-137840: {Iface:virbr1 ExpiryTime:2025-10-03 19:21:00 +0000 UTC Type:0 Mac:52:54:00:cf:29:df Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-137840-m02 Clientid:01:52:54:00:cf:29:df}
	I1003 18:22:33.782202   31512 main.go:141] libmachine: domain multinode-137840-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:cf:29:df in network mk-multinode-137840
	I1003 18:22:33.782377   31512 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21625-8656/.minikube/machines/multinode-137840-m02/id_rsa Username:docker}
	I1003 18:22:33.867033   31512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:22:33.885361   31512 status.go:176] multinode-137840-m02 status: &{Name:multinode-137840-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:22:33.885401   31512 status.go:174] checking status of multinode-137840-m03 ...
	I1003 18:22:33.886977   31512 status.go:371] multinode-137840-m03 host status = "Stopped" (err=<nil>)
	I1003 18:22:33.887000   31512 status.go:384] host is not running, skipping remaining checks
	I1003 18:22:33.887007   31512 status.go:176] multinode-137840-m03 status: &{Name:multinode-137840-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
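
Note the exit-code convention the test relies on: status returns 0 only when every node is healthy, and a non-zero code (7 here, with one worker stopped) otherwise, so scripts can check the exit status instead of parsing output. A sketch:

  minikube -p multinode-137840 node stop m03
  minikube -p multinode-137840 status || echo "status exit $?: at least one node is down"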

TestMultiNode/serial/StartAfterStop (41.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-137840 node start m03 -v=5 --alsologtostderr: (40.846381518s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.37s)
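
Restarting the stopped worker mirrors the stop; a sketch, assuming kubectl's current context points at the profile:

  minikube -p multinode-137840 node start m03
  kubectl get nodes   # m03 should return to Ready once its kubelet rejoins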

TestMultiNode/serial/RestartKeepsNodes (303.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-137840
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-137840
E1003 18:24:39.247312   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:25:01.132578   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-137840: (2m51.556607975s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-137840 --wait=true -v=5 --alsologtostderr
E1003 18:26:36.173299   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-137840 --wait=true -v=5 --alsologtostderr: (2m11.355534423s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-137840
--- PASS: TestMultiNode/serial/RestartKeepsNodes (303.04s)
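
The point of this test is that a full stop/start cycle preserves the node set. A sketch of the cycle:

  minikube node list -p multinode-137840    # record the three nodes
  minikube stop -p multinode-137840         # stops every node in the profile
  minikube start -p multinode-137840 --wait=true
  minikube node list -p multinode-137840    # the same three nodes should reappear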

TestMultiNode/serial/DeleteNode (2.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-137840 node delete m03: (2.205612502s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.68s)
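
Deleting a worker removes both the VM and the corresponding Kubernetes node object; a sketch:

  minikube -p multinode-137840 node delete m03
  kubectl get nodes   # only the control plane and m02 should remain, both Ready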

TestMultiNode/serial/StopMultiNode (169.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 stop
E1003 18:30:01.128041   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-137840 stop: (2m49.15724511s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-137840 status: exit status 7 (63.740186ms)
-- stdout --
	multinode-137840
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-137840-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr: exit status 7 (62.779423ms)
-- stdout --
	multinode-137840
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-137840-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1003 18:31:10.264526   33899 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:31:10.264825   33899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:31:10.264835   33899 out.go:374] Setting ErrFile to fd 2...
	I1003 18:31:10.264839   33899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:31:10.265100   33899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 18:31:10.265326   33899 out.go:368] Setting JSON to false
	I1003 18:31:10.265363   33899 mustload.go:65] Loading cluster: multinode-137840
	I1003 18:31:10.265484   33899 notify.go:220] Checking for updates...
	I1003 18:31:10.265892   33899 config.go:182] Loaded profile config "multinode-137840": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:10.265910   33899 status.go:174] checking status of multinode-137840 ...
	I1003 18:31:10.267676   33899 status.go:371] multinode-137840 host status = "Stopped" (err=<nil>)
	I1003 18:31:10.267692   33899 status.go:384] host is not running, skipping remaining checks
	I1003 18:31:10.267697   33899 status.go:176] multinode-137840 status: &{Name:multinode-137840 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:31:10.267725   33899 status.go:174] checking status of multinode-137840-m02 ...
	I1003 18:31:10.269099   33899 status.go:371] multinode-137840-m02 host status = "Stopped" (err=<nil>)
	I1003 18:31:10.269116   33899 status.go:384] host is not running, skipping remaining checks
	I1003 18:31:10.269123   33899 status.go:176] multinode-137840-m02 status: &{Name:multinode-137840-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (169.28s)

TestMultiNode/serial/RestartMultiNode (85.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-137840 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1003 18:31:36.172471   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-137840 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m24.873331849s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-137840 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.35s)

TestMultiNode/serial/ValidateNameConflict (43.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-137840
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-137840-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-137840-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.134064ms)
-- stdout --
	* [multinode-137840-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-137840-m02' is duplicated with machine name 'multinode-137840-m02' in profile 'multinode-137840'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-137840-m03 --driver=kvm2  --container-runtime=crio
E1003 18:33:04.196957   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-137840-m03 --driver=kvm2  --container-runtime=crio: (41.991394491s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-137840
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-137840: exit status 80 (209.216776ms)
-- stdout --
	* Adding node m03 to cluster multinode-137840 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-137840-m03 already exists in multinode-137840-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-137840-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.17s)
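
The two failures above encode the naming rules: a new profile may not reuse a machine name owned by an existing multi-node profile (exit 14, MK_USAGE), and node add refuses a node name already taken by a standalone profile (exit 80, GUEST_NODE_ADD). A sketch of the sequence (the -m03 start only succeeds here because that node was deleted earlier in the run):

  minikube start -p multinode-137840-m02 --driver=kvm2 --container-runtime=crio   # rejected: MK_USAGE
  minikube start -p multinode-137840-m03 --driver=kvm2 --container-runtime=crio   # accepted as a new profile
  minikube node add -p multinode-137840                                           # rejected: GUEST_NODE_ADD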

TestScheduledStopUnix (110.11s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-398259 --memory=3072 --driver=kvm2  --container-runtime=crio
E1003 18:36:36.177210   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-398259 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.425606547s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398259 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-398259 -n scheduled-stop-398259
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398259 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1003 18:36:39.584114   12564 retry.go:31] will retry after 51.12µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.584260   12564 retry.go:31] will retry after 127.379µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.585383   12564 retry.go:31] will retry after 323.335µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.586531   12564 retry.go:31] will retry after 256.698µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.587692   12564 retry.go:31] will retry after 301.806µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.588848   12564 retry.go:31] will retry after 826.732µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.590010   12564 retry.go:31] will retry after 629.478µs: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.591189   12564 retry.go:31] will retry after 1.552528ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.593413   12564 retry.go:31] will retry after 3.610246ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.597658   12564 retry.go:31] will retry after 2.5001ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.600922   12564 retry.go:31] will retry after 6.933608ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.608259   12564 retry.go:31] will retry after 6.594214ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.615537   12564 retry.go:31] will retry after 11.175359ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.626972   12564 retry.go:31] will retry after 27.683004ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.655306   12564 retry.go:31] will retry after 35.657597ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
I1003 18:36:39.691689   12564 retry.go:31] will retry after 41.386657ms: open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/scheduled-stop-398259/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398259 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-398259 -n scheduled-stop-398259
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-398259
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398259 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-398259
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-398259: exit status 7 (61.534863ms)
-- stdout --
	scheduled-stop-398259
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-398259 -n scheduled-stop-398259
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-398259 -n scheduled-stop-398259: exit status 7 (63.069309ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-398259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-398259
--- PASS: TestScheduledStopUnix (110.11s)
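
The scheduled-stop flow exercised above, as a minimal sketch against the same (now deleted) profile name:

  minikube stop -p scheduled-stop-398259 --schedule 5m        # arm a stop five minutes out
  minikube stop -p scheduled-stop-398259 --cancel-scheduled   # disarm it
  minikube stop -p scheduled-stop-398259 --schedule 15s       # re-arm with a short fuse
  minikube status -p scheduled-stop-398259 --format={{.Host}} # reports Stopped (exit 7) once it fires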

TestRunningBinaryUpgrade (119.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1460204125 start -p running-upgrade-689902 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1460204125 start -p running-upgrade-689902 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m35.28198859s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-689902 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-689902 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (20.829057731s)
helpers_test.go:175: Cleaning up "running-upgrade-689902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-689902
--- PASS: TestRunningBinaryUpgrade (119.62s)

TestKubernetesUpgrade (184.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.929070107s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-684417
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-684417: (2.193596541s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-684417 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-684417 status --format={{.Host}}: exit status 7 (91.142777ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.675720642s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-684417 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.059455ms)
-- stdout --
	* [kubernetes-upgrade-684417] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-684417
	    minikube start -p kubernetes-upgrade-684417 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6844172 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-684417 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-684417 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.665506723s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-684417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-684417
--- PASS: TestKubernetesUpgrade (184.66s)
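
The upgrade path this test validates is start-old, stop, start-new; an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED) and requires deleting and recreating the cluster, as the suggestion text above spells out. A sketch:

  minikube start -p kubernetes-upgrade-684417 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
  minikube stop -p kubernetes-upgrade-684417
  minikube start -p kubernetes-upgrade-684417 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio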

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-116959 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-116959 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.5278ms)
-- stdout --
	* [NoKubernetes-116959] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
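
As the stderr explains, --no-kubernetes and an explicit --kubernetes-version are mutually exclusive; clear any pinned version and drop the flag. A sketch:

  minikube config unset kubernetes-version   # clear a globally pinned version, if one is set
  minikube start -p NoKubernetes-116959 --no-kubernetes --driver=kvm2 --container-runtime=crio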

TestNoKubernetes/serial/StartWithK8s (78.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-116959 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-116959 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.058314252s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-116959 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.34s)

TestNoKubernetes/serial/StartWithStopK8s (27.21s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-116959 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-116959 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (25.980006918s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-116959 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-116959 status -o json: exit status 2 (249.881106ms)
-- stdout --
	{"Name":"NoKubernetes-116959","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-116959
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.21s)

TestStoppedBinaryUpgrade/Setup (2.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

TestStoppedBinaryUpgrade/Upgrade (99.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1317731198 start -p stopped-upgrade-071429 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1317731198 start -p stopped-upgrade-071429 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (38.960066579s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1317731198 -p stopped-upgrade-071429 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1317731198 -p stopped-upgrade-071429 stop: (1.69813983s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-071429 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-071429 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.814988319s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.47s)
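
The stopped-binary upgrade is: provision with the old release, stop it, then start the same profile with the new binary. A sketch (the /tmp path is the temporary copy of v1.32.0 the test downloads; note the legacy --vm-driver spelling that binary uses):

  /tmp/minikube-v1.32.0.1317731198 start -p stopped-upgrade-071429 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.32.0.1317731198 -p stopped-upgrade-071429 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-071429 --memory=3072 --driver=kvm2 --container-runtime=crio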

TestNoKubernetes/serial/Start (40.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-116959 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-116959 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.680907863s)
--- PASS: TestNoKubernetes/serial/Start (40.68s)

TestPause/serial/Start (111.5s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-280494 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1003 18:40:01.128401   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-280494 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m51.4994643s)
--- PASS: TestPause/serial/Start (111.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-116959 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-116959 "sudo systemctl is-active --quiet service kubelet": exit status 1 (159.445093ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
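
The verification is just systemctl's exit status over ssh: is-active returns non-zero when the unit is not running, which is the expected state for a --no-kubernetes profile. A sketch:

  minikube ssh -p NoKubernetes-116959 "sudo systemctl is-active --quiet service kubelet"
  echo $?   # non-zero confirms the kubelet is not active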

TestNoKubernetes/serial/ProfileList (0.68s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-116959
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-116959: (1.225756022s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (57.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-116959 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-116959 --driver=kvm2  --container-runtime=crio: (57.923031758s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (57.92s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-071429
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-071429: (1.11870344s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-116959 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-116959 "sudo systemctl is-active --quiet service kubelet": exit status 1 (177.708864ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestNetworkPlugins/group/false (4.37s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-262954 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-262954 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.882966ms)
-- stdout --
	* [false-262954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1003 18:41:21.459189   40500 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:41:21.459420   40500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:21.459429   40500 out.go:374] Setting ErrFile to fd 2...
	I1003 18:41:21.459433   40500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:41:21.459718   40500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-8656/.minikube/bin
	I1003 18:41:21.460271   40500 out.go:368] Setting JSON to false
	I1003 18:41:21.461212   40500 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5025,"bootTime":1759511856,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1003 18:41:21.461297   40500 start.go:140] virtualization: kvm guest
	I1003 18:41:21.463340   40500 out.go:179] * [false-262954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1003 18:41:21.464842   40500 notify.go:220] Checking for updates...
	I1003 18:41:21.464890   40500 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:41:21.466646   40500 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:41:21.468416   40500 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-8656/kubeconfig
	I1003 18:41:21.469987   40500 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-8656/.minikube
	I1003 18:41:21.471585   40500 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1003 18:41:21.472973   40500 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:41:21.474943   40500 config.go:182] Loaded profile config "force-systemd-env-924555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:21.475081   40500 config.go:182] Loaded profile config "force-systemd-flag-670426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:21.475251   40500 config.go:182] Loaded profile config "pause-280494": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:41:21.475366   40500 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:41:21.508744   40500 out.go:179] * Using the kvm2 driver based on user configuration
	I1003 18:41:21.510216   40500 start.go:304] selected driver: kvm2
	I1003 18:41:21.510242   40500 start.go:924] validating driver "kvm2" against <nil>
	I1003 18:41:21.510262   40500 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:41:21.512882   40500 out.go:203] 
	W1003 18:41:21.514344   40500 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1003 18:41:21.515673   40500 out.go:203] 

                                                
                                                
** /stderr **
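For context, exit status 14 (MK_USAGE) is the expected outcome of this test: passing --cni=false together with --container-runtime=crio is rejected before any VM is created, because CRI-O ships no built-in pod network. The sketch below illustrates that kind of pre-flight check in Go; it is illustrative only, not minikube's actual validation code.

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the MK_USAGE rejection seen above: the "crio"
// runtime has no built-in pod network, so --cni=false is a usage error.
// Hypothetical sketch; minikube's real check covers more cases.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime == "crio" {
		return fmt.Errorf("The %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}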
net_test.go:88: 
----------------------- debugLogs start: false-262954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-262954" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Oct 2025 18:41:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.21:8443
  name: pause-280494
contexts:
- context:
    cluster: pause-280494
    extensions:
    - extension:
        last-update: Fri, 03 Oct 2025 18:41:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-280494
  name: pause-280494
current-context: ""
kind: Config
users:
- name: pause-280494
  user:
    client-certificate: /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/pause-280494/client.crt
    client-key: /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/pause-280494/client.key

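The kubeconfig dump above also explains every "context was not found" / "does not exist" error in this debugLogs run: current-context is empty and no false-262954 context was ever written, since the start command exited at validation. A small client-go check confirms this; a sketch using the kubeconfig path shown in this report, not part of the test suite.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the report above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21625-8656/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // ""
	_, ok := cfg.Contexts["false-262954"]
	fmt.Println("false-262954 context exists:", ok) // false
}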
                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-262954

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-262954"

                                                
                                                
----------------------- debugLogs end: false-262954 [took: 4.052162398s] --------------------------------
helpers_test.go:175: Cleaning up "false-262954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-262954
--- PASS: TestNetworkPlugins/group/false (4.37s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (73.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-280494 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-280494 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.938870961s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (73.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (116.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-614572 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-614572 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m56.460900734s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (116.46s)

                                                
                                    
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-280494 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
TestPause/serial/VerifyStatus (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-280494 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-280494 --output=json --layout=cluster: exit status 2 (231.013319ms)

                                                
                                                
-- stdout --
	{"Name":"pause-280494","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-280494","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)
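The status payload above reuses HTTP-style numbers for cluster state: 200 (OK), 405 (Stopped), 418 (Paused). For scripting against --output=json --layout=cluster, structs like the following decode it; field names are read off this report rather than minikube's source, so treat them as a sketch.

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical types inferred from the JSON captured above; not
// minikube's own definitions.
type ComponentStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type NodeStatus struct {
	Name       string                     `json:"Name"`
	StatusCode int                        `json:"StatusCode"`
	StatusName string                     `json:"StatusName"`
	Components map[string]ComponentStatus `json:"Components"`
}

type ClusterStatus struct {
	Name          string                     `json:"Name"`
	StatusCode    int                        `json:"StatusCode"`
	StatusName    string                     `json:"StatusName"`
	Step          string                     `json:"Step"`
	StepDetail    string                     `json:"StepDetail"`
	BinaryVersion string                     `json:"BinaryVersion"`
	Components    map[string]ComponentStatus `json:"Components"`
	Nodes         []NodeStatus               `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-280494","StatusCode":418,"StatusName":"Paused","Nodes":[]}`
	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, st.StatusCode, st.StatusName) // pause-280494 418 Paused
}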

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-280494 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.92s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-280494 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

                                                
                                    
TestPause/serial/DeletePaused (0.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-280494 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.86s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.09s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.093233316s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (105s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-040049 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-040049 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m45.000434317s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (99.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-284434 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-284434 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m39.256404428s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-614572 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [42571582-2bc8-47a7-af79-ae1cf0c45009] Pending
helpers_test.go:352: "busybox" [42571582-2bc8-47a7-af79-ae1cf0c45009] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [42571582-2bc8-47a7-af79-ae1cf0c45009] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003214517s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-614572 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.35s)
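Several DeployApp tests below repeat the same pattern seen here: create a pod from a manifest, then poll until every pod matching a label selector reports Running. A minimal client-go sketch of that pattern follows; it is illustrative, not the suite's helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls every two seconds until all pods matching selector
// in namespace ns report phase Running, or the timeout expires.
func waitForRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector and timeout mirror the "integration-test=busybox" wait above.
	fmt.Println(waitForRunning(cs, "default", "integration-test=busybox", 8*time.Minute))
}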

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-614572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-614572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011050245s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-614572 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (77.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-614572 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-614572 --alsologtostderr -v=3: (1m17.114994525s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (77.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-284434 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6f0c1bd4-ea04-4953-b792-3476ca6b2477] Pending
helpers_test.go:352: "busybox" [6f0c1bd4-ea04-4953-b792-3476ca6b2477] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6f0c1bd4-ea04-4953-b792-3476ca6b2477] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005301049s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-284434 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-040049 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [02d2fe79-dc52-47f4-83f7-6aa279607476] Pending
helpers_test.go:352: "busybox" [02d2fe79-dc52-47f4-83f7-6aa279607476] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [02d2fe79-dc52-47f4-83f7-6aa279607476] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004649238s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-040049 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-284434 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (79.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-284434 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-284434 --alsologtostderr -v=3: (1m19.055139758s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (79.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-040049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-040049 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (85.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-040049 --alsologtostderr -v=3
E1003 18:45:01.128511   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-040049 --alsologtostderr -v=3: (1m25.948983839s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614572 -n old-k8s-version-614572
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614572 -n old-k8s-version-614572: exit status 7 (61.530491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-614572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
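The "(may be ok)" notes above reflect that minikube status encodes host state in its exit code; in this run, exit status 7 accompanies a Stopped host. When scripting the same check, treating 7 as a non-fatal "stopped" signal mirrors the test's behavior. A sketch, with the binary path and profile name taken from this report:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Binary path and profile name taken from the report above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-614572")
	out, err := cmd.Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host running: %s\n", out)
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit status 7 accompanied "Stopped" in this run; not fatal.
		fmt.Printf("host stopped: %s\n", out)
	default:
		log.Fatal(err)
	}
}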

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-614572 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-614572 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.905330444s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614572 -n old-k8s-version-614572
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284434 -n embed-certs-284434
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284434 -n embed-certs-284434: exit status 7 (79.890405ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-284434 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (46.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-284434 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-284434 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (46.164367988s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284434 -n embed-certs-284434
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-192202 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-192202 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m37.788651945s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-040049 -n no-preload-040049
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-040049 -n no-preload-040049: exit status 7 (62.667023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-040049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (89.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-040049 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-040049 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.273785848s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-040049 -n no-preload-040049
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (89.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4g697" [2917f5c2-a45f-4469-a601-ce69c2638ce3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1003 18:46:36.172645   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4g697" [2917f5c2-a45f-4469-a601-ce69c2638ce3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.005350743s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4g697" [2917f5c2-a45f-4469-a601-ce69c2638ce3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004979292s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-614572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-614572 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-614572 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-614572 -n old-k8s-version-614572
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-614572 -n old-k8s-version-614572: exit status 2 (233.254395ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-614572 -n old-k8s-version-614572
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-614572 -n old-k8s-version-614572: exit status 2 (240.293005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-614572 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-614572 -n old-k8s-version-614572
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-614572 -n old-k8s-version-614572
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (65.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-319795 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-319795 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m5.823700521s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gpv5j" [20832706-8055-44fd-9e6b-e78d7ccd5bff] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004154183s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gpv5j" [20832706-8055-44fd-9e6b-e78d7ccd5bff] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00516221s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-284434 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-284434 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-284434 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-284434 --alsologtostderr -v=1: (1.032208629s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284434 -n embed-certs-284434
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284434 -n embed-certs-284434: exit status 2 (272.897398ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284434 -n embed-certs-284434
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284434 -n embed-certs-284434: exit status 2 (258.997214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-284434 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284434 -n embed-certs-284434
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-284434 -n embed-certs-284434
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (106.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m46.025104781s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vxcvf" [6179be3c-fa59-46dc-9042-7b3d0f724f00] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vxcvf" [6179be3c-fa59-46dc-9042-7b3d0f724f00] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005111613s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)
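Note: the helpers_test.go waiter polls pods matching the label selector until they report Running. An equivalent manual check, with context, namespace, label, and timeout copied from the lines above (kubectl wait is my substitution for the poll loop):

    kubectl --context no-preload-040049 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m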

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-192202 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f60c4c2f-a6a4-45c3-bc59-995b2e5cfa65] Pending
helpers_test.go:352: "busybox" [f60c4c2f-a6a4-45c3-bc59-995b2e5cfa65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f60c4c2f-a6a4-45c3-bc59-995b2e5cfa65] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005089835s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-192202 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.41s)
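Note: the deploy step can be replayed outside the harness with the same manifest; kubectl wait stands in for the test's polling helper (a sketch, with the 8m budget taken from the wait line above):

    kubectl --context default-k8s-diff-port-192202 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-192202 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    # The final assertion only reads the container's open-file limit:
    kubectl --context default-k8s-diff-port-192202 exec busybox -- /bin/sh -c "ulimit -n"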

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vxcvf" [6179be3c-fa59-46dc-9042-7b3d0f724f00] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00407272s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-040049 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-319795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-319795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.296686188s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (88.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-319795 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-319795 --alsologtostderr -v=3: (1m28.190954356s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (88.19s)
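Note: after a stop like this one, host status queries print "Stopped" and exit 7, and addons can still be toggled on the stopped profile; the EnableAddonAfterStop case further down depends on both behaviors. Reproduction sketch using commands from this log:

    out/minikube-linux-amd64 stop -p newest-cni-319795 --alsologtostderr -v=3
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-319795 -n newest-cni-319795   # "Stopped", exit status 7
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-319795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4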

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-040049 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-040049 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-040049 -n no-preload-040049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-040049 -n no-preload-040049: exit status 2 (254.528751ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-040049 -n no-preload-040049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-040049 -n no-preload-040049: exit status 2 (228.29044ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-040049 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-040049 -n no-preload-040049
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-040049 -n no-preload-040049
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-192202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-192202 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058767221s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-192202 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (84.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-192202 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-192202 --alsologtostderr -v=3: (1m24.719928515s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.72s)

TestNetworkPlugins/group/kindnet/Start (57.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (57.443720172s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.44s)

TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-262954 "pgrep -a kubelet"
I1003 18:49:02.835495   12564 config.go:182] Loaded profile config "auto-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-262954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p9c4b" [43c1dd12-1d2b-44b0-8616-dfa116f5ad2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p9c4b" [43c1dd12-1d2b-44b0-8616-dfa116f5ad2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004795582s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-x5xr2" [4216e3e4-c091-4997-b3bb-b67168a0134f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004421332s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-262954 "pgrep -a kubelet"
I1003 18:49:14.117775   12564 config.go:182] Loaded profile config "kindnet-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-262954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vfkhz" [42f1eb60-fea5-4a6b-9a7b-6146fde907da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vfkhz" [42f1eb60-fea5-4a6b-9a7b-6146fde907da] Running
E1003 18:49:20.153980   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/old-k8s-version-614572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004124053s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
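Note: DNS, Localhost, and HairPin are the same three probes run from inside the netcat deployment: cluster DNS resolution, a dial to the pod's own loopback, and a hairpin dial back through the pod's own service. They can be replayed verbatim against any profile in this report, e.g. auto-262954:

    kubectl --context auto-262954 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: the pod dials the "netcat" service, which routes back to itself.
    kubectl --context auto-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"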

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (73.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m13.918746541s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.92s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-319795 -n newest-cni-319795
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-319795 -n newest-cni-319795: exit status 7 (68.195783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-319795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/newest-cni/serial/SecondStart (52.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-319795 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-319795 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (52.331393533s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-319795 -n newest-cni-319795
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (52.59s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202: exit status 7 (67.710699ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-192202 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (78.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-192202 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1003 18:49:35.517622   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/old-k8s-version-614572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-192202 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m17.642314897s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (78.04s)

TestNetworkPlugins/group/custom-flannel/Start (117.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1003 18:49:44.198640   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.078068   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.084619   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.096171   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.117668   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.159133   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.240572   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.402678   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:46.724963   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:47.367209   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:48.649017   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:51.211216   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:55.999465   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/old-k8s-version-614572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:49:56.332582   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:50:01.129066   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/functional-965419/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:50:06.574033   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m57.478509524s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (117.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-319795 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (3.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-319795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-319795 -n newest-cni-319795
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-319795 -n newest-cni-319795: exit status 2 (259.669379ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-319795 -n newest-cni-319795
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-319795 -n newest-cni-319795: exit status 2 (261.944396ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-319795 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-319795 -n newest-cni-319795
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-319795 -n newest-cni-319795
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.11s)

TestNetworkPlugins/group/enable-default-cni/Start (74.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1003 18:50:36.961391   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/old-k8s-version-614572/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m14.392945962s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.39s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2crnf" [204bc08e-15cf-47a5-a71d-43ba1809b471] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-2crnf" [204bc08e-15cf-47a5-a71d-43ba1809b471] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009648271s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-262954 "pgrep -a kubelet"
I1003 18:50:48.396992   12564 config.go:182] Loaded profile config "calico-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (15.05s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-262954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-262954 replace --force -f testdata/netcat-deployment.yaml: (1.999653166s)
I1003 18:50:51.302456   12564 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjw8s" [f52f1999-068b-4d71-8716-3896f2ab37c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjw8s" [f52f1999-068b-4d71-8716-3896f2ab37c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005081298s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.05s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d297x" [f38b0fe7-09d2-4b35-854e-2a2b5d8d33e9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00434832s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d297x" [f38b0fe7-09d2-4b35-854e-2a2b5d8d33e9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005125907s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-192202 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-192202 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
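Note: this check parses "image list --format=json" and flags images outside minikube's expected set. To eyeball the same data locally the JSON can be piped through jq; the repoTags field name is an assumption about the output schema, so adjust for your minikube version:

    out/minikube-linux-amd64 -p default-k8s-diff-port-192202 image list --format=json | jq -r '.[].repoTags[]'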

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-192202 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-192202 --alsologtostderr -v=1: (1.057278152s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202: exit status 2 (260.923829ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202: exit status 2 (250.361514ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-192202 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-192202 -n default-k8s-diff-port-192202
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

TestNetworkPlugins/group/flannel/Start (79.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m19.099826521s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.10s)

TestNetworkPlugins/group/bridge/Start (92.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1003 18:51:36.173322   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/addons-925003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-262954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m32.36585388s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.37s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-262954 "pgrep -a kubelet"
I1003 18:51:36.975232   12564 config.go:182] Loaded profile config "custom-flannel-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-262954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zh655" [5d4db5d2-8624-41aa-b4db-9877088b0b6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zh655" [5d4db5d2-8624-41aa-b4db-9877088b0b6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004480754s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.31s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-262954 "pgrep -a kubelet"
I1003 18:51:41.995954   12564 config.go:182] Loaded profile config "enable-default-cni-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-262954 replace --force -f testdata/netcat-deployment.yaml
I1003 18:51:42.900435   12564 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1003 18:51:42.939197   12564 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2x2hw" [0aca9f88-df69-4892-ac31-3d75edf45c3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2x2hw" [0aca9f88-df69-4892-ac31-3d75edf45c3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005675703s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.99s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-hzbmv" [91450057-9082-498a-bcf6-748219a4a99e] Running
E1003 18:52:29.938538   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/no-preload-040049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004818535s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
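Note: the waiter above translates to a plain kubectl query; namespace and label come from its own log line:

    kubectl --context flannel-262954 -n kube-flannel get pods -l app=flannel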

TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-262954 "pgrep -a kubelet"
I1003 18:52:34.382014   12564 config.go:182] Loaded profile config "flannel-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-262954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c4h84" [baa4a3e0-df54-49e3-bf58-d1789bcda6eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c4h84" [baa4a3e0-df54-49e3-bf58-d1789bcda6eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004949126s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-262954 "pgrep -a kubelet"
I1003 18:52:54.117537   12564 config.go:182] Loaded profile config "bridge-262954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-262954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wb444" [eaf248d7-9465-43b9-a312-a54fc53f485f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1003 18:52:57.431576   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:57.438025   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:57.449446   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:57.471034   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:57.512501   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:57.593946   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:57.756158   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wb444" [eaf248d7-9465-43b9-a312-a54fc53f485f] Running
E1003 18:53:00.001903   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:53:02.564208   12564 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/default-k8s-diff-port-192202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005028752s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)
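The interleaved E1003 cert_rotation errors reference the client certificate of the default-k8s-diff-port-192202 profile, which no longer exists on disk at this point in the run; they appear to come from a stale client-cert watcher in the test binary and did not affect this test, which passed.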

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-262954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
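Taken together, the three probes above are easy to replay by hand. As a minimal sketch against the same profile (assuming the netcat deployment from testdata/netcat-deployment.yaml is still running):

  # DNS: resolve the cluster's default service name from inside the pod
  kubectl --context bridge-262954 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: -z asks nc for a connect-only scan of localhost:8080 inside the pod
  kubectl --context bridge-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: the pod reaches itself back through its own service name
  kubectl --context bridge-262954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"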

                                                
                                    

Test skip (40/329)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.17
277 TestNetworkPlugins/group/kubenet 3.73
285 TestNetworkPlugins/group/cilium 4.17
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-925003 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
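Many of the skips in this group share one cause: this job tests the crio container runtime, and these tests only run against docker. As a sketch (for a local machine rather than this CI job; the profile name docker-rt is hypothetical), the equivalent docker-runtime environment would be provisioned with:

  minikube start -p docker-rt --driver=kvm2 --container-runtime=docker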

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
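All eight TunnelCmd skips come from the same precondition check: minikube tunnel must edit host routes, so the suite first verifies it can invoke route without a password prompt. A rough way to reproduce that check on a runner, as a sketch (assuming the test probes sudo access to net-tools' route, as the skip message suggests):

  # sudo -n fails instead of prompting, mirroring the non-interactive CI environment
  sudo -n route -n >/dev/null 2>&1 && echo "passwordless route: ok" || echo "password required"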

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-986011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-986011
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.73s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-262954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-262954

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-262954

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/hosts:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/resolv.conf:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-262954

>>> host: crictl pods:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: crictl containers:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> k8s: describe netcat deployment:
error: context "kubenet-262954" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-262954" does not exist

>>> k8s: netcat logs:
error: context "kubenet-262954" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-262954" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-262954" does not exist

>>> k8s: coredns logs:
error: context "kubenet-262954" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-262954" does not exist

>>> k8s: api server logs:
error: context "kubenet-262954" does not exist

>>> host: /etc/cni:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: ip a s:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: ip r s:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: iptables-save:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: iptables table nat:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-262954" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-262954" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-262954" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: kubelet daemon config:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> k8s: kubelet logs:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Oct 2025 18:41:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.21:8443
  name: pause-280494
contexts:
- context:
    cluster: pause-280494
    extensions:
    - extension:
        last-update: Fri, 03 Oct 2025 18:41:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-280494
  name: pause-280494
current-context: ""
kind: Config
users:
- name: pause-280494
  user:
    client-certificate: /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/pause-280494/client.crt
    client-key: /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/pause-280494/client.key
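Two details in this dump explain every kubectl failure above: the only entry left in the kubeconfig is the stale pause-280494 cluster, and current-context is empty, so any lookup of kubenet-262954 fails before a connection is even attempted. As a sketch, the state can be confirmed with standard kubectl config subcommands:

  kubectl config get-contexts              # shows only pause-280494, with no current context
  kubectl config use-context pause-280494  # how a current context would be selected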

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-262954

>>> host: docker daemon status:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: docker daemon config:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: docker system info:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: cri-docker daemon status:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: cri-docker daemon config:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: cri-dockerd version:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: containerd daemon status:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: containerd daemon config:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: containerd config dump:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: crio daemon status:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: crio daemon config:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: /etc/crio:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

>>> host: crio config:
* Profile "kubenet-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-262954"

----------------------- debugLogs end: kubenet-262954 [took: 3.564383307s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-262954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-262954
--- SKIP: TestNetworkPlugins/group/kubenet (3.73s)

x
+
TestNetworkPlugins/group/cilium (4.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-262954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-262954

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-262954" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: kubelet daemon config:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> k8s: kubelet logs:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21625-8656/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 03 Oct 2025 18:41:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.21:8443
  name: pause-280494
contexts:
- context:
    cluster: pause-280494
    extensions:
    - extension:
        last-update: Fri, 03 Oct 2025 18:41:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-280494
  name: pause-280494
current-context: ""
kind: Config
users:
- name: pause-280494
  user:
    client-certificate: /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/pause-280494/client.crt
    client-key: /home/jenkins/minikube-integration/21625-8656/.minikube/profiles/pause-280494/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-262954
>>> host: docker daemon status:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: docker daemon config:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: docker system info:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: cri-docker daemon status:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: cri-docker daemon config:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: cri-dockerd version:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: containerd daemon status:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: containerd daemon config:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: containerd config dump:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: crio daemon status:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: crio daemon config:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: /etc/crio:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
>>> host: crio config:
* Profile "cilium-262954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262954"
----------------------- debugLogs end: cilium-262954 [took: 4.008258359s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-262954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-262954
--- SKIP: TestNetworkPlugins/group/cilium (4.17s)